00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2390 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3651 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.182 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.183 The recommended git tool is: git 00:00:00.183 using credential 00000000-0000-0000-0000-000000000002 00:00:00.187 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.199 Fetching changes from the remote Git repository 00:00:00.201 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.213 Using shallow fetch with depth 1 00:00:00.213 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.213 > git --version # timeout=10 00:00:00.230 > git --version # 'git version 2.39.2' 00:00:00.230 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.250 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.250 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.734 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.745 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.754 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.754 > git config core.sparsecheckout # timeout=10 00:00:05.764 > git read-tree -mu HEAD # timeout=10 00:00:05.779 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.798 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.798 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.877 [Pipeline] Start of Pipeline 00:00:05.891 [Pipeline] library 00:00:05.892 Loading library shm_lib@master 00:00:05.892 Library shm_lib@master is cached. Copying from home. 00:00:05.906 [Pipeline] node 00:00:05.918 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:05.919 [Pipeline] { 00:00:05.926 [Pipeline] catchError 00:00:05.927 [Pipeline] { 00:00:05.936 [Pipeline] wrap 00:00:05.942 [Pipeline] { 00:00:05.947 [Pipeline] stage 00:00:05.949 [Pipeline] { (Prologue) 00:00:05.961 [Pipeline] echo 00:00:05.962 Node: VM-host-SM0 00:00:05.967 [Pipeline] cleanWs 00:00:05.976 [WS-CLEANUP] Deleting project workspace... 00:00:05.976 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.981 [WS-CLEANUP] done 00:00:06.232 [Pipeline] setCustomBuildProperty 00:00:06.295 [Pipeline] httpRequest 00:00:06.778 [Pipeline] echo 00:00:06.779 Sorcerer 10.211.164.20 is alive 00:00:06.788 [Pipeline] retry 00:00:06.789 [Pipeline] { 00:00:06.801 [Pipeline] httpRequest 00:00:06.806 HttpMethod: GET 00:00:06.807 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.807 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.818 Response Code: HTTP/1.1 200 OK 00:00:06.819 Success: Status code 200 is in the accepted range: 200,404 00:00:06.819 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.138 [Pipeline] } 00:00:09.154 [Pipeline] // retry 00:00:09.161 [Pipeline] sh 00:00:09.442 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.457 [Pipeline] httpRequest 00:00:09.820 [Pipeline] echo 00:00:09.822 Sorcerer 10.211.164.20 is alive 00:00:09.830 [Pipeline] retry 00:00:09.832 [Pipeline] { 00:00:09.844 [Pipeline] httpRequest 00:00:09.849 HttpMethod: GET 00:00:09.850 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.850 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:09.868 Response Code: HTTP/1.1 200 OK 00:00:09.869 Success: Status code 200 is in the accepted range: 200,404 00:00:09.869 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:14.386 [Pipeline] } 00:01:14.404 [Pipeline] // retry 00:01:14.412 [Pipeline] sh 00:01:14.693 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:17.277 [Pipeline] sh 00:01:17.558 + git -C spdk log --oneline -n5 00:01:17.558 c13c99a5e test: Various fixes for Fedora40 00:01:17.558 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:17.558 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:17.558 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:17.558 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:17.580 [Pipeline] writeFile 00:01:17.595 [Pipeline] sh 00:01:17.878 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:17.890 [Pipeline] sh 00:01:18.171 + cat autorun-spdk.conf 00:01:18.171 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.171 SPDK_TEST_NVMF=1 00:01:18.171 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.171 SPDK_TEST_VFIOUSER=1 00:01:18.171 SPDK_TEST_USDT=1 00:01:18.171 SPDK_RUN_UBSAN=1 00:01:18.171 SPDK_TEST_NVMF_MDNS=1 00:01:18.171 NET_TYPE=virt 00:01:18.171 SPDK_JSONRPC_GO_CLIENT=1 00:01:18.171 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.178 RUN_NIGHTLY=1 00:01:18.181 [Pipeline] } 00:01:18.194 [Pipeline] // stage 00:01:18.210 [Pipeline] stage 00:01:18.212 [Pipeline] { (Run VM) 00:01:18.225 [Pipeline] sh 00:01:18.506 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:18.506 + echo 'Start stage prepare_nvme.sh' 00:01:18.506 Start stage prepare_nvme.sh 00:01:18.506 + [[ -n 7 ]] 00:01:18.506 + disk_prefix=ex7 00:01:18.506 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:18.506 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:18.506 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:18.506 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.506 ++ SPDK_TEST_NVMF=1 00:01:18.506 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.506 ++ SPDK_TEST_VFIOUSER=1 00:01:18.506 ++ SPDK_TEST_USDT=1 00:01:18.506 ++ SPDK_RUN_UBSAN=1 00:01:18.506 ++ SPDK_TEST_NVMF_MDNS=1 00:01:18.506 ++ NET_TYPE=virt 00:01:18.506 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:18.506 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.506 ++ RUN_NIGHTLY=1 00:01:18.506 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:18.506 + nvme_files=() 00:01:18.506 + declare -A nvme_files 00:01:18.506 + backend_dir=/var/lib/libvirt/images/backends 00:01:18.506 + nvme_files['nvme.img']=5G 00:01:18.506 + nvme_files['nvme-cmb.img']=5G 00:01:18.506 + nvme_files['nvme-multi0.img']=4G 00:01:18.506 + nvme_files['nvme-multi1.img']=4G 00:01:18.506 + nvme_files['nvme-multi2.img']=4G 00:01:18.506 + nvme_files['nvme-openstack.img']=8G 00:01:18.506 + nvme_files['nvme-zns.img']=5G 00:01:18.506 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:18.506 + (( SPDK_TEST_FTL == 1 )) 00:01:18.506 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:18.506 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:18.506 + for nvme in "${!nvme_files[@]}" 00:01:18.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:18.506 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.506 + for nvme in "${!nvme_files[@]}" 00:01:18.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:18.506 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.506 + for nvme in "${!nvme_files[@]}" 00:01:18.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:18.506 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:18.506 + for nvme in "${!nvme_files[@]}" 00:01:18.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:18.506 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.506 + for nvme in "${!nvme_files[@]}" 00:01:18.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:18.506 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.506 + for nvme in "${!nvme_files[@]}" 00:01:18.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:18.506 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:18.506 + for nvme in "${!nvme_files[@]}" 00:01:18.506 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:18.765 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.765 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:18.765 + echo 'End stage prepare_nvme.sh' 00:01:18.765 End stage prepare_nvme.sh 00:01:18.775 [Pipeline] sh 00:01:19.053 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:19.053 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:01:19.053 00:01:19.054 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:19.054 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:19.054 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:19.054 HELP=0 00:01:19.054 DRY_RUN=0 00:01:19.054 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:19.054 NVME_DISKS_TYPE=nvme,nvme, 00:01:19.054 NVME_AUTO_CREATE=0 00:01:19.054 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:19.054 NVME_CMB=,, 00:01:19.054 NVME_PMR=,, 00:01:19.054 NVME_ZNS=,, 00:01:19.054 NVME_MS=,, 00:01:19.054 NVME_FDP=,, 00:01:19.054 SPDK_VAGRANT_DISTRO=fedora39 00:01:19.054 SPDK_VAGRANT_VMCPU=10 00:01:19.054 SPDK_VAGRANT_VMRAM=12288 00:01:19.054 SPDK_VAGRANT_PROVIDER=libvirt 00:01:19.054 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:19.054 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:19.054 SPDK_OPENSTACK_NETWORK=0 00:01:19.054 VAGRANT_PACKAGE_BOX=0 00:01:19.054 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:19.054 FORCE_DISTRO=true 00:01:19.054 VAGRANT_BOX_VERSION= 00:01:19.054 EXTRA_VAGRANTFILES= 00:01:19.054 NIC_MODEL=e1000 00:01:19.054 00:01:19.054 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:19.054 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:21.587 Bringing machine 'default' up with 'libvirt' provider... 00:01:22.524 ==> default: Creating image (snapshot of base box volume). 00:01:22.524 ==> default: Creating domain with the following settings... 
00:01:22.524 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732155482_47e51de04a563e8d46c6 00:01:22.524 ==> default: -- Domain type: kvm 00:01:22.524 ==> default: -- Cpus: 10 00:01:22.524 ==> default: -- Feature: acpi 00:01:22.524 ==> default: -- Feature: apic 00:01:22.524 ==> default: -- Feature: pae 00:01:22.524 ==> default: -- Memory: 12288M 00:01:22.524 ==> default: -- Memory Backing: hugepages: 00:01:22.524 ==> default: -- Management MAC: 00:01:22.524 ==> default: -- Loader: 00:01:22.524 ==> default: -- Nvram: 00:01:22.524 ==> default: -- Base box: spdk/fedora39 00:01:22.524 ==> default: -- Storage pool: default 00:01:22.524 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732155482_47e51de04a563e8d46c6.img (20G) 00:01:22.524 ==> default: -- Volume Cache: default 00:01:22.524 ==> default: -- Kernel: 00:01:22.524 ==> default: -- Initrd: 00:01:22.524 ==> default: -- Graphics Type: vnc 00:01:22.524 ==> default: -- Graphics Port: -1 00:01:22.524 ==> default: -- Graphics IP: 127.0.0.1 00:01:22.524 ==> default: -- Graphics Password: Not defined 00:01:22.524 ==> default: -- Video Type: cirrus 00:01:22.524 ==> default: -- Video VRAM: 9216 00:01:22.524 ==> default: -- Sound Type: 00:01:22.524 ==> default: -- Keymap: en-us 00:01:22.524 ==> default: -- TPM Path: 00:01:22.524 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:22.524 ==> default: -- Command line args: 00:01:22.524 ==> default: -> value=-device, 00:01:22.524 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:22.524 ==> default: -> value=-drive, 00:01:22.524 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:22.524 ==> default: -> value=-device, 00:01:22.524 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.524 ==> default: -> value=-device, 00:01:22.524 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:22.524 ==> default: -> value=-drive, 00:01:22.524 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:22.524 ==> default: -> value=-device, 00:01:22.524 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.524 ==> default: -> value=-drive, 00:01:22.524 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:22.524 ==> default: -> value=-device, 00:01:22.524 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.524 ==> default: -> value=-drive, 00:01:22.524 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:22.524 ==> default: -> value=-device, 00:01:22.524 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:22.783 ==> default: Creating shared folders metadata... 00:01:22.783 ==> default: Starting domain. 00:01:24.685 ==> default: Waiting for domain to get an IP address... 00:01:42.775 ==> default: Waiting for SSH to become available... 00:01:42.776 ==> default: Configuring and enabling network interfaces... 
00:01:46.073 default: SSH address: 192.168.121.239:22 00:01:46.073 default: SSH username: vagrant 00:01:46.073 default: SSH auth method: private key 00:01:47.983 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:56.099 ==> default: Mounting SSHFS shared folder... 00:01:58.009 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:58.010 ==> default: Checking Mount.. 00:01:59.386 ==> default: Folder Successfully Mounted! 00:01:59.386 ==> default: Running provisioner: file... 00:01:59.954 default: ~/.gitconfig => .gitconfig 00:02:00.522 00:02:00.522 SUCCESS! 00:02:00.522 00:02:00.522 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.522 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.522 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.522 00:02:00.532 [Pipeline] } 00:02:00.547 [Pipeline] // stage 00:02:00.558 [Pipeline] dir 00:02:00.559 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:00.561 [Pipeline] { 00:02:00.573 [Pipeline] catchError 00:02:00.575 [Pipeline] { 00:02:00.588 [Pipeline] sh 00:02:00.868 + vagrant ssh-config --host vagrant 00:02:00.868 + sed -ne /^Host/,$p 00:02:00.868 + tee ssh_conf 00:02:04.156 Host vagrant 00:02:04.156 HostName 192.168.121.239 00:02:04.156 User vagrant 00:02:04.156 Port 22 00:02:04.156 UserKnownHostsFile /dev/null 00:02:04.156 StrictHostKeyChecking no 00:02:04.156 PasswordAuthentication no 00:02:04.156 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:04.156 IdentitiesOnly yes 00:02:04.156 LogLevel FATAL 00:02:04.156 ForwardAgent yes 00:02:04.156 ForwardX11 yes 00:02:04.156 00:02:04.174 [Pipeline] withEnv 00:02:04.178 [Pipeline] { 00:02:04.195 [Pipeline] sh 00:02:04.478 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:04.478 source /etc/os-release 00:02:04.478 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.478 # Minimal, systemd-like check. 00:02:04.478 if [[ -e /.dockerenv ]]; then 00:02:04.478 # Clear garbage from the node's name: 00:02:04.478 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.478 # $HOSTNAME is the actual container id 00:02:04.478 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.478 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.478 # We can assume this is a mount from a host where container is running, 00:02:04.478 # so fetch its hostname to easily identify the target swarm worker. 
00:02:04.478 container="$(< /etc/hostname) ($agent)" 00:02:04.478 else 00:02:04.478 # Fallback 00:02:04.478 container=$agent 00:02:04.478 fi 00:02:04.478 fi 00:02:04.478 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.478 00:02:04.748 [Pipeline] } 00:02:04.765 [Pipeline] // withEnv 00:02:04.775 [Pipeline] setCustomBuildProperty 00:02:04.791 [Pipeline] stage 00:02:04.794 [Pipeline] { (Tests) 00:02:04.814 [Pipeline] sh 00:02:05.094 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.364 [Pipeline] sh 00:02:05.642 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:05.915 [Pipeline] timeout 00:02:05.916 Timeout set to expire in 1 hr 0 min 00:02:05.918 [Pipeline] { 00:02:05.934 [Pipeline] sh 00:02:06.236 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:06.816 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:06.828 [Pipeline] sh 00:02:07.107 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:07.378 [Pipeline] sh 00:02:07.655 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:07.929 [Pipeline] sh 00:02:08.207 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:08.465 ++ readlink -f spdk_repo 00:02:08.465 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.465 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.465 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.465 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.465 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.465 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.465 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.465 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:08.465 + cd /home/vagrant/spdk_repo 00:02:08.465 + source /etc/os-release 00:02:08.465 ++ NAME='Fedora Linux' 00:02:08.465 ++ VERSION='39 (Cloud Edition)' 00:02:08.465 ++ ID=fedora 00:02:08.465 ++ VERSION_ID=39 00:02:08.465 ++ VERSION_CODENAME= 00:02:08.465 ++ PLATFORM_ID=platform:f39 00:02:08.465 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.465 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.465 ++ LOGO=fedora-logo-icon 00:02:08.465 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.465 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.465 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.465 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.465 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.465 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.465 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.465 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.465 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.466 ++ SUPPORT_END=2024-11-12 00:02:08.466 ++ VARIANT='Cloud Edition' 00:02:08.466 ++ VARIANT_ID=cloud 00:02:08.466 + uname -a 00:02:08.466 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:08.466 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:08.466 Hugepages 00:02:08.466 node hugesize free / total 00:02:08.466 node0 1048576kB 0 / 0 00:02:08.466 node0 2048kB 0 / 0 00:02:08.466 00:02:08.466 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:08.466 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:08.466 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:08.466 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:08.724 + rm -f /tmp/spdk-ld-path 00:02:08.724 + source autorun-spdk.conf 00:02:08.724 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.724 ++ SPDK_TEST_NVMF=1 00:02:08.724 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.724 ++ SPDK_TEST_VFIOUSER=1 00:02:08.724 ++ SPDK_TEST_USDT=1 00:02:08.724 ++ SPDK_RUN_UBSAN=1 00:02:08.724 ++ SPDK_TEST_NVMF_MDNS=1 00:02:08.724 ++ NET_TYPE=virt 00:02:08.724 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:08.724 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.724 ++ RUN_NIGHTLY=1 00:02:08.724 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:08.724 + [[ -n '' ]] 00:02:08.724 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:08.724 + for M in /var/spdk/build-*-manifest.txt 00:02:08.724 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:08.724 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.724 + for M in /var/spdk/build-*-manifest.txt 00:02:08.724 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:08.724 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.724 + for M in /var/spdk/build-*-manifest.txt 00:02:08.724 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:08.724 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.724 ++ uname 00:02:08.724 + [[ Linux == \L\i\n\u\x ]] 00:02:08.724 + sudo dmesg -T 00:02:08.724 + sudo dmesg --clear 00:02:08.724 + dmesg_pid=5227 00:02:08.724 + sudo dmesg -Tw 00:02:08.724 + [[ Fedora Linux == FreeBSD ]] 00:02:08.724 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.724 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.724 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:08.724 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.724 + export FIO_BIN=/usr/src/fio-static/fio 00:02:08.724 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.724 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.724 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:08.724 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.724 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.724 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.724 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.724 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.724 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.724 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.724 Test configuration: 00:02:08.724 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.724 SPDK_TEST_NVMF=1 00:02:08.724 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.724 SPDK_TEST_VFIOUSER=1 00:02:08.724 SPDK_TEST_USDT=1 00:02:08.724 SPDK_RUN_UBSAN=1 00:02:08.724 SPDK_TEST_NVMF_MDNS=1 00:02:08.724 NET_TYPE=virt 00:02:08.724 SPDK_JSONRPC_GO_CLIENT=1 00:02:08.724 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.724 RUN_NIGHTLY=1 02:18:49 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:08.724 02:18:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.724 02:18:49 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.724 02:18:49 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.724 02:18:49 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.725 02:18:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.725 02:18:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.725 02:18:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.725 02:18:49 -- paths/export.sh@5 -- $ export PATH 00:02:08.725 02:18:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.725 02:18:49 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.725 
02:18:49 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:08.725 02:18:49 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732155529.XXXXXX 00:02:08.725 02:18:49 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732155529.ud5oJ2 00:02:08.725 02:18:49 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:08.725 02:18:49 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:08.725 02:18:49 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:08.725 02:18:49 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.725 02:18:49 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.725 02:18:49 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:08.725 02:18:49 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:08.725 02:18:49 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.983 02:18:49 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:02:08.983 02:18:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.983 02:18:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.983 02:18:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.983 02:18:49 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.983 Thu Nov 21 02:18:49 AM UTC 2024 00:02:08.983 02:18:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.983 LTS-67-gc13c99a5e 00:02:08.983 02:18:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:08.983 02:18:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.983 02:18:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.983 02:18:49 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:08.983 02:18:49 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:08.983 02:18:49 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.983 ************************************ 00:02:08.983 START TEST ubsan 00:02:08.983 ************************************ 00:02:08.983 using ubsan 00:02:08.983 02:18:49 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:08.983 00:02:08.983 real 0m0.000s 00:02:08.983 user 0m0.000s 00:02:08.983 sys 0m0.000s 00:02:08.983 02:18:49 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:08.983 02:18:49 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.983 ************************************ 00:02:08.983 END TEST ubsan 00:02:08.983 ************************************ 00:02:08.983 02:18:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.983 02:18:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.983 02:18:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.983 02:18:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.983 02:18:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.983 02:18:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.983 02:18:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.983 02:18:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.983 02:18:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:02:09.271 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:09.271 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.530 Using 'verbs' RDMA provider 00:02:24.970 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:37.180 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:37.180 go version go1.21.1 linux/amd64 00:02:37.180 Creating mk/config.mk...done. 00:02:37.180 Creating mk/cc.flags.mk...done. 00:02:37.180 Type 'make' to build. 00:02:37.180 02:19:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:37.180 02:19:17 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:37.180 02:19:17 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:37.180 02:19:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.180 ************************************ 00:02:37.180 START TEST make 00:02:37.180 ************************************ 00:02:37.180 02:19:17 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:37.439 make[1]: Nothing to be done for 'all'. 00:02:38.818 The Meson build system 00:02:38.818 Version: 1.5.0 00:02:38.818 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:38.818 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:38.818 Build type: native build 00:02:38.818 Project name: libvfio-user 00:02:38.818 Project version: 0.0.1 00:02:38.818 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:38.818 C linker for the host machine: cc ld.bfd 2.40-14 00:02:38.818 Host machine cpu family: x86_64 00:02:38.818 Host machine cpu: x86_64 00:02:38.818 Run-time dependency threads found: YES 00:02:38.818 Library dl found: YES 00:02:38.818 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:38.818 Run-time dependency json-c found: YES 0.17 00:02:38.818 Run-time dependency cmocka found: YES 1.1.7 00:02:38.818 Program pytest-3 found: NO 00:02:38.818 Program flake8 found: NO 00:02:38.818 Program misspell-fixer found: NO 00:02:38.818 Program restructuredtext-lint found: NO 00:02:38.818 Program valgrind found: YES (/usr/bin/valgrind) 00:02:38.818 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:38.818 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:38.818 Compiler for C supports arguments -Wwrite-strings: YES 00:02:38.818 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:38.818 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:38.818 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:38.818 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:38.818 Build targets in project: 8 00:02:38.818 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:38.818 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:38.818 00:02:38.818 libvfio-user 0.0.1 00:02:38.818 00:02:38.818 User defined options 00:02:38.818 buildtype : debug 00:02:38.818 default_library: shared 00:02:38.818 libdir : /usr/local/lib 00:02:38.818 00:02:38.818 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:39.566 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:39.566 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:39.566 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:39.566 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:39.566 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:39.842 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:39.842 [6/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:39.842 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:39.842 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:39.842 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:39.842 [10/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:39.842 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:39.842 [12/37] Compiling C object samples/null.p/null.c.o 00:02:39.842 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:39.842 [14/37] Compiling C object samples/client.p/client.c.o 00:02:39.842 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:39.842 [16/37] Compiling C object samples/server.p/server.c.o 00:02:39.842 [17/37] Linking target samples/client 00:02:39.842 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:39.842 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:39.842 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:39.842 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:39.842 [22/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:39.842 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:40.101 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:40.101 [25/37] Linking target lib/libvfio-user.so.0.0.1 00:02:40.101 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:40.101 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:40.101 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:40.101 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:40.101 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:40.101 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:40.101 [32/37] Linking target test/unit_tests 00:02:40.101 [33/37] Linking target samples/server 00:02:40.101 [34/37] Linking target samples/gpio-pci-idio-16 00:02:40.101 [35/37] Linking target samples/null 00:02:40.101 [36/37] Linking target samples/lspci 00:02:40.101 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:40.101 INFO: autodetecting backend as ninja 00:02:40.101 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:40.360 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:40.620 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:40.620 ninja: no work to do. 00:02:48.747 The Meson build system 00:02:48.747 Version: 1.5.0 00:02:48.747 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:48.747 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:48.747 Build type: native build 00:02:48.747 Program cat found: YES (/usr/bin/cat) 00:02:48.747 Project name: DPDK 00:02:48.747 Project version: 23.11.0 00:02:48.747 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:48.747 C linker for the host machine: cc ld.bfd 2.40-14 00:02:48.747 Host machine cpu family: x86_64 00:02:48.747 Host machine cpu: x86_64 00:02:48.747 Message: ## Building in Developer Mode ## 00:02:48.747 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.747 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:48.747 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.747 Program python3 found: YES (/usr/bin/python3) 00:02:48.747 Program cat found: YES (/usr/bin/cat) 00:02:48.747 Compiler for C supports arguments -march=native: YES 00:02:48.747 Checking for size of "void *" : 8 00:02:48.747 Checking for size of "void *" : 8 (cached) 00:02:48.747 Library m found: YES 00:02:48.747 Library numa found: YES 00:02:48.747 Has header "numaif.h" : YES 00:02:48.747 Library fdt found: NO 00:02:48.747 Library execinfo found: NO 00:02:48.747 Has header "execinfo.h" : YES 00:02:48.747 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:48.747 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.747 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.747 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.747 Run-time dependency openssl found: YES 3.1.1 00:02:48.747 Run-time dependency libpcap found: YES 1.10.4 00:02:48.747 Has header "pcap.h" with dependency libpcap: YES 00:02:48.747 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.747 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.747 Compiler for C supports arguments -Wformat: YES 00:02:48.747 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:48.747 Compiler for C supports arguments -Wformat-security: NO 00:02:48.747 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.747 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.747 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.747 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.747 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.747 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.747 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.747 Compiler for C supports arguments -Wundef: YES 00:02:48.747 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.747 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:48.747 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:48.747 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.747 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.747 Program objdump found: YES (/usr/bin/objdump) 00:02:48.747 
Compiler for C supports arguments -mavx512f: YES 00:02:48.747 Checking if "AVX512 checking" compiles: YES 00:02:48.747 Fetching value of define "__SSE4_2__" : 1 00:02:48.748 Fetching value of define "__AES__" : 1 00:02:48.748 Fetching value of define "__AVX__" : 1 00:02:48.748 Fetching value of define "__AVX2__" : 1 00:02:48.748 Fetching value of define "__AVX512BW__" : (undefined) 00:02:48.748 Fetching value of define "__AVX512CD__" : (undefined) 00:02:48.748 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:48.748 Fetching value of define "__AVX512F__" : (undefined) 00:02:48.748 Fetching value of define "__AVX512VL__" : (undefined) 00:02:48.748 Fetching value of define "__PCLMUL__" : 1 00:02:48.748 Fetching value of define "__RDRND__" : 1 00:02:48.748 Fetching value of define "__RDSEED__" : 1 00:02:48.748 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:48.748 Fetching value of define "__znver1__" : (undefined) 00:02:48.748 Fetching value of define "__znver2__" : (undefined) 00:02:48.748 Fetching value of define "__znver3__" : (undefined) 00:02:48.748 Fetching value of define "__znver4__" : (undefined) 00:02:48.748 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.748 Message: lib/log: Defining dependency "log" 00:02:48.748 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.748 Message: lib/telemetry: Defining dependency "telemetry" 00:02:48.748 Checking for function "getentropy" : NO 00:02:48.748 Message: lib/eal: Defining dependency "eal" 00:02:48.748 Message: lib/ring: Defining dependency "ring" 00:02:48.748 Message: lib/rcu: Defining dependency "rcu" 00:02:48.748 Message: lib/mempool: Defining dependency "mempool" 00:02:48.748 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.748 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.748 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:48.748 Compiler for C supports arguments -mpclmul: YES 00:02:48.748 Compiler for C supports arguments -maes: YES 00:02:48.748 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.748 Compiler for C supports arguments -mavx512bw: YES 00:02:48.748 Compiler for C supports arguments -mavx512dq: YES 00:02:48.748 Compiler for C supports arguments -mavx512vl: YES 00:02:48.748 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.748 Compiler for C supports arguments -mavx2: YES 00:02:48.748 Compiler for C supports arguments -mavx: YES 00:02:48.748 Message: lib/net: Defining dependency "net" 00:02:48.748 Message: lib/meter: Defining dependency "meter" 00:02:48.748 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.748 Message: lib/pci: Defining dependency "pci" 00:02:48.748 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.748 Message: lib/hash: Defining dependency "hash" 00:02:48.748 Message: lib/timer: Defining dependency "timer" 00:02:48.748 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.748 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.748 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.748 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.748 Message: lib/power: Defining dependency "power" 00:02:48.748 Message: lib/reorder: Defining dependency "reorder" 00:02:48.748 Message: lib/security: Defining dependency "security" 00:02:48.748 Has header "linux/userfaultfd.h" : YES 00:02:48.748 Has header "linux/vduse.h" : YES 00:02:48.748 Message: lib/vhost: Defining dependency "vhost" 00:02:48.748 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:48.748 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.748 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.748 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:48.748 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:48.748 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:48.748 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:48.748 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:48.748 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:48.748 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:48.748 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:48.748 Configuring doxy-api-html.conf using configuration 00:02:48.748 Configuring doxy-api-man.conf using configuration 00:02:48.748 Program mandb found: YES (/usr/bin/mandb) 00:02:48.748 Program sphinx-build found: NO 00:02:48.748 Configuring rte_build_config.h using configuration 00:02:48.748 Message: 00:02:48.748 ================= 00:02:48.748 Applications Enabled 00:02:48.748 ================= 00:02:48.748 00:02:48.748 apps: 00:02:48.748 00:02:48.748 00:02:48.748 Message: 00:02:48.748 ================= 00:02:48.748 Libraries Enabled 00:02:48.748 ================= 00:02:48.748 00:02:48.748 libs: 00:02:48.748 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:48.748 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:48.748 cryptodev, dmadev, power, reorder, security, vhost, 00:02:48.748 00:02:48.748 Message: 00:02:48.748 =============== 00:02:48.748 Drivers Enabled 00:02:48.748 =============== 00:02:48.748 00:02:48.748 common: 00:02:48.748 00:02:48.748 bus: 00:02:48.748 pci, vdev, 00:02:48.748 mempool: 00:02:48.748 ring, 00:02:48.748 dma: 00:02:48.748 00:02:48.748 net: 00:02:48.748 00:02:48.748 crypto: 00:02:48.748 00:02:48.748 compress: 00:02:48.748 00:02:48.748 vdpa: 00:02:48.748 00:02:48.748 00:02:48.748 Message: 00:02:48.748 ================= 00:02:48.748 Content Skipped 00:02:48.748 ================= 00:02:48.748 00:02:48.748 apps: 00:02:48.748 dumpcap: explicitly disabled via build config 00:02:48.748 graph: explicitly disabled via build config 00:02:48.748 pdump: explicitly disabled via build config 00:02:48.748 proc-info: explicitly disabled via build config 00:02:48.748 test-acl: explicitly disabled via build config 00:02:48.748 test-bbdev: explicitly disabled via build config 00:02:48.748 test-cmdline: explicitly disabled via build config 00:02:48.748 test-compress-perf: explicitly disabled via build config 00:02:48.748 test-crypto-perf: explicitly disabled via build config 00:02:48.748 test-dma-perf: explicitly disabled via build config 00:02:48.748 test-eventdev: explicitly disabled via build config 00:02:48.748 test-fib: explicitly disabled via build config 00:02:48.748 test-flow-perf: explicitly disabled via build config 00:02:48.748 test-gpudev: explicitly disabled via build config 00:02:48.748 test-mldev: explicitly disabled via build config 00:02:48.748 test-pipeline: explicitly disabled via build config 00:02:48.748 test-pmd: explicitly disabled via build config 00:02:48.748 test-regex: explicitly disabled via build config 00:02:48.748 test-sad: explicitly disabled via build config 00:02:48.748 test-security-perf: explicitly disabled via build config 00:02:48.748 00:02:48.748 libs: 00:02:48.748 metrics: explicitly 
disabled via build config 00:02:48.748 acl: explicitly disabled via build config 00:02:48.748 bbdev: explicitly disabled via build config 00:02:48.748 bitratestats: explicitly disabled via build config 00:02:48.748 bpf: explicitly disabled via build config 00:02:48.748 cfgfile: explicitly disabled via build config 00:02:48.748 distributor: explicitly disabled via build config 00:02:48.748 efd: explicitly disabled via build config 00:02:48.748 eventdev: explicitly disabled via build config 00:02:48.748 dispatcher: explicitly disabled via build config 00:02:48.748 gpudev: explicitly disabled via build config 00:02:48.748 gro: explicitly disabled via build config 00:02:48.748 gso: explicitly disabled via build config 00:02:48.748 ip_frag: explicitly disabled via build config 00:02:48.748 jobstats: explicitly disabled via build config 00:02:48.748 latencystats: explicitly disabled via build config 00:02:48.748 lpm: explicitly disabled via build config 00:02:48.748 member: explicitly disabled via build config 00:02:48.748 pcapng: explicitly disabled via build config 00:02:48.748 rawdev: explicitly disabled via build config 00:02:48.748 regexdev: explicitly disabled via build config 00:02:48.748 mldev: explicitly disabled via build config 00:02:48.748 rib: explicitly disabled via build config 00:02:48.748 sched: explicitly disabled via build config 00:02:48.748 stack: explicitly disabled via build config 00:02:48.748 ipsec: explicitly disabled via build config 00:02:48.748 pdcp: explicitly disabled via build config 00:02:48.748 fib: explicitly disabled via build config 00:02:48.748 port: explicitly disabled via build config 00:02:48.748 pdump: explicitly disabled via build config 00:02:48.748 table: explicitly disabled via build config 00:02:48.748 pipeline: explicitly disabled via build config 00:02:48.748 graph: explicitly disabled via build config 00:02:48.748 node: explicitly disabled via build config 00:02:48.748 00:02:48.748 drivers: 00:02:48.748 common/cpt: not in enabled drivers build config 00:02:48.748 common/dpaax: not in enabled drivers build config 00:02:48.748 common/iavf: not in enabled drivers build config 00:02:48.748 common/idpf: not in enabled drivers build config 00:02:48.748 common/mvep: not in enabled drivers build config 00:02:48.748 common/octeontx: not in enabled drivers build config 00:02:48.748 bus/auxiliary: not in enabled drivers build config 00:02:48.748 bus/cdx: not in enabled drivers build config 00:02:48.748 bus/dpaa: not in enabled drivers build config 00:02:48.748 bus/fslmc: not in enabled drivers build config 00:02:48.748 bus/ifpga: not in enabled drivers build config 00:02:48.748 bus/platform: not in enabled drivers build config 00:02:48.748 bus/vmbus: not in enabled drivers build config 00:02:48.748 common/cnxk: not in enabled drivers build config 00:02:48.748 common/mlx5: not in enabled drivers build config 00:02:48.748 common/nfp: not in enabled drivers build config 00:02:48.748 common/qat: not in enabled drivers build config 00:02:48.748 common/sfc_efx: not in enabled drivers build config 00:02:48.748 mempool/bucket: not in enabled drivers build config 00:02:48.748 mempool/cnxk: not in enabled drivers build config 00:02:48.748 mempool/dpaa: not in enabled drivers build config 00:02:48.748 mempool/dpaa2: not in enabled drivers build config 00:02:48.748 mempool/octeontx: not in enabled drivers build config 00:02:48.748 mempool/stack: not in enabled drivers build config 00:02:48.748 dma/cnxk: not in enabled drivers build config 00:02:48.748 dma/dpaa: not in 
enabled drivers build config 00:02:48.748 dma/dpaa2: not in enabled drivers build config 00:02:48.748 dma/hisilicon: not in enabled drivers build config 00:02:48.748 dma/idxd: not in enabled drivers build config 00:02:48.749 dma/ioat: not in enabled drivers build config 00:02:48.749 dma/skeleton: not in enabled drivers build config 00:02:48.749 net/af_packet: not in enabled drivers build config 00:02:48.749 net/af_xdp: not in enabled drivers build config 00:02:48.749 net/ark: not in enabled drivers build config 00:02:48.749 net/atlantic: not in enabled drivers build config 00:02:48.749 net/avp: not in enabled drivers build config 00:02:48.749 net/axgbe: not in enabled drivers build config 00:02:48.749 net/bnx2x: not in enabled drivers build config 00:02:48.749 net/bnxt: not in enabled drivers build config 00:02:48.749 net/bonding: not in enabled drivers build config 00:02:48.749 net/cnxk: not in enabled drivers build config 00:02:48.749 net/cpfl: not in enabled drivers build config 00:02:48.749 net/cxgbe: not in enabled drivers build config 00:02:48.749 net/dpaa: not in enabled drivers build config 00:02:48.749 net/dpaa2: not in enabled drivers build config 00:02:48.749 net/e1000: not in enabled drivers build config 00:02:48.749 net/ena: not in enabled drivers build config 00:02:48.749 net/enetc: not in enabled drivers build config 00:02:48.749 net/enetfec: not in enabled drivers build config 00:02:48.749 net/enic: not in enabled drivers build config 00:02:48.749 net/failsafe: not in enabled drivers build config 00:02:48.749 net/fm10k: not in enabled drivers build config 00:02:48.749 net/gve: not in enabled drivers build config 00:02:48.749 net/hinic: not in enabled drivers build config 00:02:48.749 net/hns3: not in enabled drivers build config 00:02:48.749 net/i40e: not in enabled drivers build config 00:02:48.749 net/iavf: not in enabled drivers build config 00:02:48.749 net/ice: not in enabled drivers build config 00:02:48.749 net/idpf: not in enabled drivers build config 00:02:48.749 net/igc: not in enabled drivers build config 00:02:48.749 net/ionic: not in enabled drivers build config 00:02:48.749 net/ipn3ke: not in enabled drivers build config 00:02:48.749 net/ixgbe: not in enabled drivers build config 00:02:48.749 net/mana: not in enabled drivers build config 00:02:48.749 net/memif: not in enabled drivers build config 00:02:48.749 net/mlx4: not in enabled drivers build config 00:02:48.749 net/mlx5: not in enabled drivers build config 00:02:48.749 net/mvneta: not in enabled drivers build config 00:02:48.749 net/mvpp2: not in enabled drivers build config 00:02:48.749 net/netvsc: not in enabled drivers build config 00:02:48.749 net/nfb: not in enabled drivers build config 00:02:48.749 net/nfp: not in enabled drivers build config 00:02:48.749 net/ngbe: not in enabled drivers build config 00:02:48.749 net/null: not in enabled drivers build config 00:02:48.749 net/octeontx: not in enabled drivers build config 00:02:48.749 net/octeon_ep: not in enabled drivers build config 00:02:48.749 net/pcap: not in enabled drivers build config 00:02:48.749 net/pfe: not in enabled drivers build config 00:02:48.749 net/qede: not in enabled drivers build config 00:02:48.749 net/ring: not in enabled drivers build config 00:02:48.749 net/sfc: not in enabled drivers build config 00:02:48.749 net/softnic: not in enabled drivers build config 00:02:48.749 net/tap: not in enabled drivers build config 00:02:48.749 net/thunderx: not in enabled drivers build config 00:02:48.749 net/txgbe: not in enabled drivers 
build config 00:02:48.749 net/vdev_netvsc: not in enabled drivers build config 00:02:48.749 net/vhost: not in enabled drivers build config 00:02:48.749 net/virtio: not in enabled drivers build config 00:02:48.749 net/vmxnet3: not in enabled drivers build config 00:02:48.749 raw/*: missing internal dependency, "rawdev" 00:02:48.749 crypto/armv8: not in enabled drivers build config 00:02:48.749 crypto/bcmfs: not in enabled drivers build config 00:02:48.749 crypto/caam_jr: not in enabled drivers build config 00:02:48.749 crypto/ccp: not in enabled drivers build config 00:02:48.749 crypto/cnxk: not in enabled drivers build config 00:02:48.749 crypto/dpaa_sec: not in enabled drivers build config 00:02:48.749 crypto/dpaa2_sec: not in enabled drivers build config 00:02:48.749 crypto/ipsec_mb: not in enabled drivers build config 00:02:48.749 crypto/mlx5: not in enabled drivers build config 00:02:48.749 crypto/mvsam: not in enabled drivers build config 00:02:48.749 crypto/nitrox: not in enabled drivers build config 00:02:48.749 crypto/null: not in enabled drivers build config 00:02:48.749 crypto/octeontx: not in enabled drivers build config 00:02:48.749 crypto/openssl: not in enabled drivers build config 00:02:48.749 crypto/scheduler: not in enabled drivers build config 00:02:48.749 crypto/uadk: not in enabled drivers build config 00:02:48.749 crypto/virtio: not in enabled drivers build config 00:02:48.749 compress/isal: not in enabled drivers build config 00:02:48.749 compress/mlx5: not in enabled drivers build config 00:02:48.749 compress/octeontx: not in enabled drivers build config 00:02:48.749 compress/zlib: not in enabled drivers build config 00:02:48.749 regex/*: missing internal dependency, "regexdev" 00:02:48.749 ml/*: missing internal dependency, "mldev" 00:02:48.749 vdpa/ifc: not in enabled drivers build config 00:02:48.749 vdpa/mlx5: not in enabled drivers build config 00:02:48.749 vdpa/nfp: not in enabled drivers build config 00:02:48.749 vdpa/sfc: not in enabled drivers build config 00:02:48.749 event/*: missing internal dependency, "eventdev" 00:02:48.749 baseband/*: missing internal dependency, "bbdev" 00:02:48.749 gpu/*: missing internal dependency, "gpudev" 00:02:48.749 00:02:48.749 00:02:49.323 Build targets in project: 85 00:02:49.323 00:02:49.323 DPDK 23.11.0 00:02:49.323 00:02:49.323 User defined options 00:02:49.323 buildtype : debug 00:02:49.323 default_library : shared 00:02:49.323 libdir : lib 00:02:49.323 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:49.323 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:49.323 c_link_args : 00:02:49.323 cpu_instruction_set: native 00:02:49.323 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:49.324 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:49.324 enable_docs : false 00:02:49.324 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:49.324 enable_kmods : false 00:02:49.324 tests : false 00:02:49.324 00:02:49.324 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.891 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:49.891 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:49.891 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:49.891 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:49.892 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:49.892 [5/265] Linking static target lib/librte_kvargs.a 00:02:49.892 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:49.892 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:49.892 [8/265] Linking static target lib/librte_log.a 00:02:49.892 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.150 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.410 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.668 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:50.668 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:50.669 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:50.928 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:50.928 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:50.928 [17/265] Linking static target lib/librte_telemetry.a 00:02:50.928 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:50.928 [19/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.928 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:50.928 [21/265] Linking target lib/librte_log.so.24.0 00:02:50.928 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:51.187 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:51.187 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:51.446 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:51.446 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:51.705 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:51.705 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:51.705 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:51.705 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:51.705 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.705 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:51.705 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:51.964 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:51.964 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:51.964 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:51.964 [37/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:51.964 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:52.223 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:52.223 [40/265] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:52.223 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:52.223 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:52.482 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:52.482 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:52.482 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:52.742 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.001 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:53.001 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:53.001 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:53.001 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:53.001 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:53.260 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:53.260 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:53.260 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:53.260 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:53.518 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:53.518 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:53.518 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:53.777 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:53.777 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:53.777 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.036 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.036 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.036 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:54.036 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:54.036 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:54.036 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:54.295 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:54.554 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:54.554 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:54.554 [71/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:54.812 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:54.812 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:54.812 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:54.812 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:54.812 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:54.812 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:55.071 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:55.071 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:55.071 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.071 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.329 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:55.588 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.588 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.588 [85/265] Linking static target lib/librte_ring.a 00:02:55.588 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:55.848 [87/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.848 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:55.848 [89/265] Linking static target lib/librte_eal.a 00:02:55.848 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.107 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:56.107 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.107 [93/265] Linking static target lib/librte_rcu.a 00:02:56.107 [94/265] Linking static target lib/librte_mempool.a 00:02:56.366 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.366 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.366 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.625 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:56.625 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:56.625 [100/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.625 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:56.625 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.625 [103/265] Linking static target lib/librte_mbuf.a 00:02:56.884 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.143 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.402 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.402 [107/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.402 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.402 [109/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.402 [110/265] Linking static target lib/librte_net.a 00:02:57.662 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:57.662 [112/265] Linking static target lib/librte_meter.a 00:02:57.921 [113/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.921 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.921 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.921 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.180 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.180 [118/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.439 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.699 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:58.699 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:02:58.699 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:58.957 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:58.957 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:58.957 [125/265] Linking static target lib/librte_pci.a 00:02:59.216 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.216 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:59.216 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.216 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.216 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.216 [131/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.475 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.475 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.475 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:59.475 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.475 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.475 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.475 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.475 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.475 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.475 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.475 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.475 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.734 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.734 [145/265] Linking static target lib/librte_ethdev.a 00:02:59.992 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.992 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.992 [148/265] Linking static target lib/librte_cmdline.a 00:03:00.251 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.251 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.251 [151/265] Linking static target lib/librte_timer.a 00:03:00.251 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.251 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.509 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.509 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:00.509 [156/265] Linking static target lib/librte_hash.a 00:03:00.768 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.768 [158/265] Linking static target lib/librte_compressdev.a 00:03:00.768 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:01.026 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.026 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:01.026 [162/265] Compiling C object 
lib/librte_power.a.p/power_guest_channel.c.o 00:03:01.026 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:01.285 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.285 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:01.285 [166/265] Linking static target lib/librte_dmadev.a 00:03:01.551 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:01.551 [168/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.551 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:01.818 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.818 [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.818 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.818 [173/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.818 [174/265] Linking static target lib/librte_cryptodev.a 00:03:01.818 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.077 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.336 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:02.336 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.336 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:02.336 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.595 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.595 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.595 [183/265] Linking static target lib/librte_power.a 00:03:02.595 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.595 [185/265] Linking static target lib/librte_reorder.a 00:03:02.855 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.855 [187/265] Linking static target lib/librte_security.a 00:03:03.114 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.114 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.114 [190/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.114 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.373 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.941 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.941 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.941 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:03.941 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.941 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.200 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.460 [199/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.460 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.720 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.720 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.720 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.720 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.720 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.720 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.979 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:04.979 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.979 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.979 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.979 [211/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.979 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.238 [213/265] Linking static target drivers/librte_bus_pci.a 00:03:05.238 [214/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.238 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.238 [216/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.238 [217/265] Linking static target drivers/librte_bus_vdev.a 00:03:05.238 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.238 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.497 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.497 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.497 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.497 [223/265] Linking static target drivers/librte_mempool_ring.a 00:03:05.497 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.498 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.066 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.066 [227/265] Linking static target lib/librte_vhost.a 00:03:07.445 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.445 [229/265] Linking target lib/librte_eal.so.24.0 00:03:07.445 [230/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.445 [231/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:07.445 [232/265] Linking target lib/librte_pci.so.24.0 00:03:07.445 [233/265] Linking target lib/librte_ring.so.24.0 00:03:07.445 [234/265] Linking target lib/librte_meter.so.24.0 00:03:07.445 [235/265] Linking target lib/librte_timer.so.24.0 00:03:07.445 [236/265] Linking target lib/librte_dmadev.so.24.0 00:03:07.445 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:07.704 [238/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.704 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:07.704 [240/265] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:07.705 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:07.705 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:07.705 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:07.705 [244/265] Linking target lib/librte_rcu.so.24.0 00:03:07.705 [245/265] Linking target lib/librte_mempool.so.24.0 00:03:07.705 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:07.705 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:07.705 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:07.963 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:07.963 [250/265] Linking target lib/librte_mbuf.so.24.0 00:03:07.963 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:08.223 [252/265] Linking target lib/librte_net.so.24.0 00:03:08.223 [253/265] Linking target lib/librte_reorder.so.24.0 00:03:08.223 [254/265] Linking target lib/librte_compressdev.so.24.0 00:03:08.223 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:03:08.223 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:08.223 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:08.223 [258/265] Linking target lib/librte_hash.so.24.0 00:03:08.223 [259/265] Linking target lib/librte_cmdline.so.24.0 00:03:08.223 [260/265] Linking target lib/librte_ethdev.so.24.0 00:03:08.223 [261/265] Linking target lib/librte_security.so.24.0 00:03:08.482 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:08.482 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:08.482 [264/265] Linking target lib/librte_power.so.24.0 00:03:08.482 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:08.482 INFO: autodetecting backend as ninja 00:03:08.482 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:09.858 CC lib/ut_mock/mock.o 00:03:09.858 CC lib/log/log_flags.o 00:03:09.858 CC lib/log/log.o 00:03:09.858 CC lib/log/log_deprecated.o 00:03:09.858 CC lib/ut/ut.o 00:03:09.858 LIB libspdk_ut_mock.a 00:03:09.858 LIB libspdk_log.a 00:03:09.859 SO libspdk_ut_mock.so.5.0 00:03:09.859 LIB libspdk_ut.a 00:03:09.859 SO libspdk_log.so.6.1 00:03:09.859 SYMLINK libspdk_ut_mock.so 00:03:09.859 SO libspdk_ut.so.1.0 00:03:10.118 SYMLINK libspdk_log.so 00:03:10.118 SYMLINK libspdk_ut.so 00:03:10.118 CC lib/util/base64.o 00:03:10.118 CC lib/util/bit_array.o 00:03:10.118 CXX lib/trace_parser/trace.o 00:03:10.118 CC lib/util/cpuset.o 00:03:10.118 CC lib/util/crc16.o 00:03:10.118 CC lib/dma/dma.o 00:03:10.118 CC lib/util/crc32.o 00:03:10.118 CC lib/ioat/ioat.o 00:03:10.118 CC lib/util/crc32c.o 00:03:10.118 CC lib/vfio_user/host/vfio_user_pci.o 00:03:10.378 CC lib/util/crc32_ieee.o 00:03:10.378 CC lib/vfio_user/host/vfio_user.o 00:03:10.378 CC lib/util/crc64.o 00:03:10.378 CC lib/util/dif.o 00:03:10.378 LIB libspdk_dma.a 00:03:10.378 CC lib/util/fd.o 00:03:10.378 CC lib/util/file.o 00:03:10.378 SO libspdk_dma.so.3.0 00:03:10.378 CC lib/util/hexlify.o 00:03:10.378 LIB libspdk_ioat.a 00:03:10.378 SO libspdk_ioat.so.6.0 00:03:10.378 SYMLINK libspdk_dma.so 00:03:10.378 CC lib/util/iov.o 00:03:10.378 CC lib/util/math.o 
00:03:10.637 SYMLINK libspdk_ioat.so 00:03:10.637 CC lib/util/pipe.o 00:03:10.637 CC lib/util/strerror_tls.o 00:03:10.637 CC lib/util/string.o 00:03:10.637 CC lib/util/uuid.o 00:03:10.637 LIB libspdk_vfio_user.a 00:03:10.637 SO libspdk_vfio_user.so.4.0 00:03:10.637 CC lib/util/fd_group.o 00:03:10.637 CC lib/util/xor.o 00:03:10.637 SYMLINK libspdk_vfio_user.so 00:03:10.637 CC lib/util/zipf.o 00:03:10.896 LIB libspdk_util.a 00:03:10.896 SO libspdk_util.so.8.0 00:03:11.154 SYMLINK libspdk_util.so 00:03:11.154 LIB libspdk_trace_parser.a 00:03:11.154 SO libspdk_trace_parser.so.4.0 00:03:11.154 CC lib/rdma/common.o 00:03:11.154 CC lib/rdma/rdma_verbs.o 00:03:11.154 CC lib/json/json_parse.o 00:03:11.154 CC lib/idxd/idxd.o 00:03:11.154 CC lib/json/json_util.o 00:03:11.154 CC lib/idxd/idxd_user.o 00:03:11.154 CC lib/conf/conf.o 00:03:11.154 CC lib/vmd/vmd.o 00:03:11.154 CC lib/env_dpdk/env.o 00:03:11.413 SYMLINK libspdk_trace_parser.so 00:03:11.413 CC lib/vmd/led.o 00:03:11.413 CC lib/json/json_write.o 00:03:11.413 LIB libspdk_conf.a 00:03:11.413 CC lib/idxd/idxd_kernel.o 00:03:11.413 CC lib/env_dpdk/memory.o 00:03:11.413 CC lib/env_dpdk/pci.o 00:03:11.413 CC lib/env_dpdk/init.o 00:03:11.413 SO libspdk_conf.so.5.0 00:03:11.413 LIB libspdk_rdma.a 00:03:11.672 SO libspdk_rdma.so.5.0 00:03:11.672 SYMLINK libspdk_conf.so 00:03:11.672 CC lib/env_dpdk/threads.o 00:03:11.672 SYMLINK libspdk_rdma.so 00:03:11.672 CC lib/env_dpdk/pci_ioat.o 00:03:11.672 CC lib/env_dpdk/pci_virtio.o 00:03:11.672 CC lib/env_dpdk/pci_vmd.o 00:03:11.672 LIB libspdk_json.a 00:03:11.672 CC lib/env_dpdk/pci_idxd.o 00:03:11.672 CC lib/env_dpdk/pci_event.o 00:03:11.672 SO libspdk_json.so.5.1 00:03:11.672 LIB libspdk_idxd.a 00:03:11.931 SO libspdk_idxd.so.11.0 00:03:11.931 CC lib/env_dpdk/sigbus_handler.o 00:03:11.931 CC lib/env_dpdk/pci_dpdk.o 00:03:11.931 SYMLINK libspdk_json.so 00:03:11.931 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:11.931 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:11.931 LIB libspdk_vmd.a 00:03:11.931 SYMLINK libspdk_idxd.so 00:03:11.931 SO libspdk_vmd.so.5.0 00:03:11.931 SYMLINK libspdk_vmd.so 00:03:11.931 CC lib/jsonrpc/jsonrpc_server.o 00:03:11.931 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:11.931 CC lib/jsonrpc/jsonrpc_client.o 00:03:11.931 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:12.190 LIB libspdk_jsonrpc.a 00:03:12.449 SO libspdk_jsonrpc.so.5.1 00:03:12.449 SYMLINK libspdk_jsonrpc.so 00:03:12.449 CC lib/rpc/rpc.o 00:03:12.708 LIB libspdk_env_dpdk.a 00:03:12.708 LIB libspdk_rpc.a 00:03:12.708 SO libspdk_env_dpdk.so.13.0 00:03:12.708 SO libspdk_rpc.so.5.0 00:03:12.708 SYMLINK libspdk_rpc.so 00:03:12.967 SYMLINK libspdk_env_dpdk.so 00:03:12.967 CC lib/notify/notify_rpc.o 00:03:12.967 CC lib/notify/notify.o 00:03:12.967 CC lib/trace/trace_flags.o 00:03:12.967 CC lib/trace/trace.o 00:03:12.967 CC lib/trace/trace_rpc.o 00:03:12.967 CC lib/sock/sock.o 00:03:12.967 CC lib/sock/sock_rpc.o 00:03:13.227 LIB libspdk_notify.a 00:03:13.227 SO libspdk_notify.so.5.0 00:03:13.227 LIB libspdk_trace.a 00:03:13.227 SO libspdk_trace.so.9.0 00:03:13.227 SYMLINK libspdk_notify.so 00:03:13.227 SYMLINK libspdk_trace.so 00:03:13.227 LIB libspdk_sock.a 00:03:13.485 SO libspdk_sock.so.8.0 00:03:13.485 SYMLINK libspdk_sock.so 00:03:13.485 CC lib/thread/thread.o 00:03:13.485 CC lib/thread/iobuf.o 00:03:13.743 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.743 CC lib/nvme/nvme_ctrlr.o 00:03:13.743 CC lib/nvme/nvme_fabric.o 00:03:13.743 CC lib/nvme/nvme_ns_cmd.o 00:03:13.743 CC lib/nvme/nvme_ns.o 00:03:13.743 CC lib/nvme/nvme_qpair.o 00:03:13.743 CC 
lib/nvme/nvme_pcie.o 00:03:13.743 CC lib/nvme/nvme_pcie_common.o 00:03:14.001 CC lib/nvme/nvme.o 00:03:14.258 CC lib/nvme/nvme_quirks.o 00:03:14.517 CC lib/nvme/nvme_transport.o 00:03:14.517 CC lib/nvme/nvme_discovery.o 00:03:14.517 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:14.517 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.517 CC lib/nvme/nvme_tcp.o 00:03:14.775 CC lib/nvme/nvme_opal.o 00:03:14.775 CC lib/nvme/nvme_io_msg.o 00:03:15.051 CC lib/nvme/nvme_poll_group.o 00:03:15.051 LIB libspdk_thread.a 00:03:15.051 CC lib/nvme/nvme_zns.o 00:03:15.051 CC lib/nvme/nvme_cuse.o 00:03:15.051 SO libspdk_thread.so.9.0 00:03:15.319 CC lib/nvme/nvme_vfio_user.o 00:03:15.319 CC lib/nvme/nvme_rdma.o 00:03:15.319 SYMLINK libspdk_thread.so 00:03:15.319 CC lib/accel/accel.o 00:03:15.319 CC lib/blob/blobstore.o 00:03:15.578 CC lib/init/json_config.o 00:03:15.578 CC lib/blob/request.o 00:03:15.578 CC lib/init/subsystem.o 00:03:15.837 CC lib/init/subsystem_rpc.o 00:03:15.837 CC lib/init/rpc.o 00:03:15.837 CC lib/accel/accel_rpc.o 00:03:15.837 CC lib/blob/zeroes.o 00:03:15.837 LIB libspdk_init.a 00:03:15.837 CC lib/blob/blob_bs_dev.o 00:03:15.837 SO libspdk_init.so.4.0 00:03:15.837 CC lib/virtio/virtio.o 00:03:16.096 CC lib/virtio/virtio_vhost_user.o 00:03:16.096 SYMLINK libspdk_init.so 00:03:16.096 CC lib/virtio/virtio_vfio_user.o 00:03:16.096 CC lib/virtio/virtio_pci.o 00:03:16.096 CC lib/accel/accel_sw.o 00:03:16.354 CC lib/vfu_tgt/tgt_endpoint.o 00:03:16.354 CC lib/event/app.o 00:03:16.354 CC lib/vfu_tgt/tgt_rpc.o 00:03:16.354 CC lib/event/reactor.o 00:03:16.354 CC lib/event/log_rpc.o 00:03:16.354 LIB libspdk_virtio.a 00:03:16.354 CC lib/event/app_rpc.o 00:03:16.354 LIB libspdk_accel.a 00:03:16.354 SO libspdk_virtio.so.6.0 00:03:16.354 SO libspdk_accel.so.14.0 00:03:16.354 CC lib/event/scheduler_static.o 00:03:16.354 SYMLINK libspdk_virtio.so 00:03:16.612 SYMLINK libspdk_accel.so 00:03:16.613 LIB libspdk_nvme.a 00:03:16.613 LIB libspdk_vfu_tgt.a 00:03:16.613 SO libspdk_vfu_tgt.so.2.0 00:03:16.613 CC lib/bdev/bdev.o 00:03:16.613 CC lib/bdev/bdev_rpc.o 00:03:16.613 CC lib/bdev/bdev_zone.o 00:03:16.613 CC lib/bdev/scsi_nvme.o 00:03:16.613 CC lib/bdev/part.o 00:03:16.613 SYMLINK libspdk_vfu_tgt.so 00:03:16.613 LIB libspdk_event.a 00:03:16.872 SO libspdk_event.so.12.0 00:03:16.872 SO libspdk_nvme.so.12.0 00:03:16.872 SYMLINK libspdk_event.so 00:03:16.872 SYMLINK libspdk_nvme.so 00:03:18.249 LIB libspdk_blob.a 00:03:18.249 SO libspdk_blob.so.10.1 00:03:18.249 SYMLINK libspdk_blob.so 00:03:18.508 CC lib/blobfs/blobfs.o 00:03:18.508 CC lib/blobfs/tree.o 00:03:18.508 CC lib/lvol/lvol.o 00:03:19.453 LIB libspdk_bdev.a 00:03:19.453 LIB libspdk_blobfs.a 00:03:19.453 SO libspdk_bdev.so.14.0 00:03:19.453 LIB libspdk_lvol.a 00:03:19.453 SO libspdk_blobfs.so.9.0 00:03:19.453 SO libspdk_lvol.so.9.1 00:03:19.453 SYMLINK libspdk_bdev.so 00:03:19.453 SYMLINK libspdk_blobfs.so 00:03:19.453 SYMLINK libspdk_lvol.so 00:03:19.453 CC lib/ublk/ublk.o 00:03:19.453 CC lib/ublk/ublk_rpc.o 00:03:19.453 CC lib/ftl/ftl_core.o 00:03:19.453 CC lib/ftl/ftl_init.o 00:03:19.453 CC lib/ftl/ftl_debug.o 00:03:19.453 CC lib/ftl/ftl_layout.o 00:03:19.453 CC lib/ftl/ftl_io.o 00:03:19.712 CC lib/nbd/nbd.o 00:03:19.712 CC lib/scsi/dev.o 00:03:19.712 CC lib/nvmf/ctrlr.o 00:03:19.712 CC lib/scsi/lun.o 00:03:19.712 CC lib/ftl/ftl_sb.o 00:03:19.712 CC lib/nbd/nbd_rpc.o 00:03:19.972 CC lib/scsi/port.o 00:03:19.972 CC lib/ftl/ftl_l2p.o 00:03:19.972 CC lib/scsi/scsi.o 00:03:19.972 CC lib/scsi/scsi_bdev.o 00:03:19.972 CC lib/scsi/scsi_pr.o 00:03:19.972 CC 
lib/scsi/scsi_rpc.o 00:03:19.972 LIB libspdk_nbd.a 00:03:19.972 CC lib/scsi/task.o 00:03:19.972 SO libspdk_nbd.so.6.0 00:03:19.972 CC lib/ftl/ftl_l2p_flat.o 00:03:20.231 CC lib/ftl/ftl_nv_cache.o 00:03:20.231 SYMLINK libspdk_nbd.so 00:03:20.231 CC lib/ftl/ftl_band.o 00:03:20.231 CC lib/ftl/ftl_band_ops.o 00:03:20.231 CC lib/nvmf/ctrlr_discovery.o 00:03:20.231 LIB libspdk_ublk.a 00:03:20.231 SO libspdk_ublk.so.2.0 00:03:20.231 CC lib/nvmf/ctrlr_bdev.o 00:03:20.231 SYMLINK libspdk_ublk.so 00:03:20.231 CC lib/nvmf/subsystem.o 00:03:20.231 CC lib/nvmf/nvmf.o 00:03:20.231 CC lib/nvmf/nvmf_rpc.o 00:03:20.490 LIB libspdk_scsi.a 00:03:20.491 CC lib/ftl/ftl_writer.o 00:03:20.491 CC lib/ftl/ftl_rq.o 00:03:20.491 SO libspdk_scsi.so.8.0 00:03:20.749 SYMLINK libspdk_scsi.so 00:03:20.749 CC lib/ftl/ftl_reloc.o 00:03:20.749 CC lib/nvmf/transport.o 00:03:20.749 CC lib/iscsi/conn.o 00:03:20.749 CC lib/vhost/vhost.o 00:03:21.008 CC lib/iscsi/init_grp.o 00:03:21.008 CC lib/vhost/vhost_rpc.o 00:03:21.008 CC lib/ftl/ftl_l2p_cache.o 00:03:21.266 CC lib/nvmf/tcp.o 00:03:21.266 CC lib/nvmf/vfio_user.o 00:03:21.266 CC lib/iscsi/iscsi.o 00:03:21.266 CC lib/nvmf/rdma.o 00:03:21.266 CC lib/iscsi/md5.o 00:03:21.525 CC lib/iscsi/param.o 00:03:21.525 CC lib/iscsi/portal_grp.o 00:03:21.525 CC lib/iscsi/tgt_node.o 00:03:21.525 CC lib/iscsi/iscsi_subsystem.o 00:03:21.525 CC lib/ftl/ftl_p2l.o 00:03:21.784 CC lib/vhost/vhost_scsi.o 00:03:21.784 CC lib/vhost/vhost_blk.o 00:03:21.784 CC lib/ftl/mngt/ftl_mngt.o 00:03:22.042 CC lib/vhost/rte_vhost_user.o 00:03:22.042 CC lib/iscsi/iscsi_rpc.o 00:03:22.042 CC lib/iscsi/task.o 00:03:22.042 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:22.301 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:22.301 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:22.301 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:22.301 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:22.560 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:22.560 LIB libspdk_iscsi.a 00:03:22.560 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:22.819 SO libspdk_iscsi.so.7.0 00:03:22.819 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:22.819 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:22.819 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:22.819 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:22.819 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:22.819 SYMLINK libspdk_iscsi.so 00:03:22.819 CC lib/ftl/utils/ftl_conf.o 00:03:22.819 CC lib/ftl/utils/ftl_md.o 00:03:22.819 CC lib/ftl/utils/ftl_mempool.o 00:03:23.078 CC lib/ftl/utils/ftl_bitmap.o 00:03:23.078 CC lib/ftl/utils/ftl_property.o 00:03:23.078 LIB libspdk_vhost.a 00:03:23.078 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:23.078 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.078 SO libspdk_vhost.so.7.1 00:03:23.078 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:23.078 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:23.078 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:23.078 SYMLINK libspdk_vhost.so 00:03:23.078 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:23.078 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:23.336 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:23.336 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.336 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.337 CC lib/ftl/base/ftl_base_dev.o 00:03:23.337 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.337 LIB libspdk_nvmf.a 00:03:23.337 CC lib/ftl/ftl_trace.o 00:03:23.337 SO libspdk_nvmf.so.17.0 00:03:23.595 LIB libspdk_ftl.a 00:03:23.595 SYMLINK libspdk_nvmf.so 00:03:23.853 SO libspdk_ftl.so.8.0 00:03:24.112 SYMLINK libspdk_ftl.so 00:03:24.112 CC module/env_dpdk/env_dpdk_rpc.o 00:03:24.112 CC module/vfu_device/vfu_virtio.o 00:03:24.371 CC 
module/scheduler/gscheduler/gscheduler.o 00:03:24.371 CC module/accel/error/accel_error.o 00:03:24.371 CC module/blob/bdev/blob_bdev.o 00:03:24.371 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:24.371 CC module/accel/ioat/accel_ioat.o 00:03:24.371 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:24.371 CC module/accel/dsa/accel_dsa.o 00:03:24.371 CC module/sock/posix/posix.o 00:03:24.371 LIB libspdk_env_dpdk_rpc.a 00:03:24.371 SO libspdk_env_dpdk_rpc.so.5.0 00:03:24.371 LIB libspdk_scheduler_gscheduler.a 00:03:24.371 LIB libspdk_scheduler_dpdk_governor.a 00:03:24.371 SO libspdk_scheduler_gscheduler.so.3.0 00:03:24.371 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:24.371 SYMLINK libspdk_env_dpdk_rpc.so 00:03:24.371 CC module/accel/error/accel_error_rpc.o 00:03:24.371 CC module/accel/ioat/accel_ioat_rpc.o 00:03:24.371 SYMLINK libspdk_scheduler_gscheduler.so 00:03:24.371 CC module/accel/dsa/accel_dsa_rpc.o 00:03:24.371 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:24.371 CC module/vfu_device/vfu_virtio_blk.o 00:03:24.371 CC module/vfu_device/vfu_virtio_scsi.o 00:03:24.371 LIB libspdk_scheduler_dynamic.a 00:03:24.630 SO libspdk_scheduler_dynamic.so.3.0 00:03:24.630 CC module/vfu_device/vfu_virtio_rpc.o 00:03:24.630 LIB libspdk_blob_bdev.a 00:03:24.630 SYMLINK libspdk_scheduler_dynamic.so 00:03:24.630 SO libspdk_blob_bdev.so.10.1 00:03:24.630 LIB libspdk_accel_ioat.a 00:03:24.630 LIB libspdk_accel_dsa.a 00:03:24.630 LIB libspdk_accel_error.a 00:03:24.630 SO libspdk_accel_ioat.so.5.0 00:03:24.630 SO libspdk_accel_dsa.so.4.0 00:03:24.630 SO libspdk_accel_error.so.1.0 00:03:24.630 SYMLINK libspdk_blob_bdev.so 00:03:24.630 SYMLINK libspdk_accel_ioat.so 00:03:24.630 SYMLINK libspdk_accel_dsa.so 00:03:24.630 SYMLINK libspdk_accel_error.so 00:03:24.630 CC module/accel/iaa/accel_iaa.o 00:03:24.630 CC module/accel/iaa/accel_iaa_rpc.o 00:03:24.889 LIB libspdk_vfu_device.a 00:03:24.889 CC module/bdev/error/vbdev_error.o 00:03:24.889 CC module/bdev/gpt/gpt.o 00:03:24.889 CC module/bdev/error/vbdev_error_rpc.o 00:03:24.889 CC module/bdev/delay/vbdev_delay.o 00:03:24.889 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:24.889 CC module/blobfs/bdev/blobfs_bdev.o 00:03:24.889 CC module/bdev/lvol/vbdev_lvol.o 00:03:24.889 SO libspdk_vfu_device.so.2.0 00:03:24.889 LIB libspdk_accel_iaa.a 00:03:24.889 SO libspdk_accel_iaa.so.2.0 00:03:24.889 LIB libspdk_sock_posix.a 00:03:24.889 SYMLINK libspdk_vfu_device.so 00:03:25.148 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:25.148 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:25.148 CC module/bdev/gpt/vbdev_gpt.o 00:03:25.148 SYMLINK libspdk_accel_iaa.so 00:03:25.148 SO libspdk_sock_posix.so.5.0 00:03:25.148 LIB libspdk_bdev_error.a 00:03:25.148 SYMLINK libspdk_sock_posix.so 00:03:25.148 SO libspdk_bdev_error.so.5.0 00:03:25.148 CC module/bdev/malloc/bdev_malloc.o 00:03:25.148 CC module/bdev/null/bdev_null.o 00:03:25.148 CC module/bdev/nvme/bdev_nvme.o 00:03:25.148 LIB libspdk_bdev_delay.a 00:03:25.148 SYMLINK libspdk_bdev_error.so 00:03:25.148 LIB libspdk_blobfs_bdev.a 00:03:25.148 SO libspdk_bdev_delay.so.5.0 00:03:25.148 CC module/bdev/passthru/vbdev_passthru.o 00:03:25.407 SO libspdk_blobfs_bdev.so.5.0 00:03:25.407 SYMLINK libspdk_bdev_delay.so 00:03:25.407 LIB libspdk_bdev_gpt.a 00:03:25.407 CC module/bdev/raid/bdev_raid.o 00:03:25.407 SYMLINK libspdk_blobfs_bdev.so 00:03:25.407 SO libspdk_bdev_gpt.so.5.0 00:03:25.407 LIB libspdk_bdev_lvol.a 00:03:25.407 SO libspdk_bdev_lvol.so.5.0 00:03:25.407 SYMLINK libspdk_bdev_gpt.so 00:03:25.407 CC 
module/bdev/split/vbdev_split.o 00:03:25.407 CC module/bdev/null/bdev_null_rpc.o 00:03:25.407 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:25.407 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:25.407 CC module/bdev/aio/bdev_aio.o 00:03:25.407 SYMLINK libspdk_bdev_lvol.so 00:03:25.407 CC module/bdev/aio/bdev_aio_rpc.o 00:03:25.665 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:25.665 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:25.665 LIB libspdk_bdev_null.a 00:03:25.665 CC module/bdev/split/vbdev_split_rpc.o 00:03:25.665 CC module/bdev/raid/bdev_raid_rpc.o 00:03:25.665 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:25.665 SO libspdk_bdev_null.so.5.0 00:03:25.665 LIB libspdk_bdev_malloc.a 00:03:25.665 SO libspdk_bdev_malloc.so.5.0 00:03:25.665 SYMLINK libspdk_bdev_null.so 00:03:25.665 LIB libspdk_bdev_passthru.a 00:03:25.665 CC module/bdev/nvme/nvme_rpc.o 00:03:25.665 LIB libspdk_bdev_aio.a 00:03:25.665 SO libspdk_bdev_passthru.so.5.0 00:03:25.923 SYMLINK libspdk_bdev_malloc.so 00:03:25.923 CC module/bdev/nvme/bdev_mdns_client.o 00:03:25.923 LIB libspdk_bdev_zone_block.a 00:03:25.923 LIB libspdk_bdev_split.a 00:03:25.923 SO libspdk_bdev_aio.so.5.0 00:03:25.923 SO libspdk_bdev_zone_block.so.5.0 00:03:25.923 SO libspdk_bdev_split.so.5.0 00:03:25.923 SYMLINK libspdk_bdev_passthru.so 00:03:25.923 CC module/bdev/nvme/vbdev_opal.o 00:03:25.923 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:25.923 SYMLINK libspdk_bdev_aio.so 00:03:25.923 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:25.923 SYMLINK libspdk_bdev_zone_block.so 00:03:25.923 SYMLINK libspdk_bdev_split.so 00:03:25.923 CC module/bdev/raid/bdev_raid_sb.o 00:03:25.923 CC module/bdev/ftl/bdev_ftl.o 00:03:26.182 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:26.182 CC module/bdev/raid/raid0.o 00:03:26.182 CC module/bdev/iscsi/bdev_iscsi.o 00:03:26.182 CC module/bdev/raid/raid1.o 00:03:26.182 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:26.182 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:26.182 CC module/bdev/raid/concat.o 00:03:26.182 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:26.182 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:26.440 LIB libspdk_bdev_ftl.a 00:03:26.440 SO libspdk_bdev_ftl.so.5.0 00:03:26.440 SYMLINK libspdk_bdev_ftl.so 00:03:26.440 LIB libspdk_bdev_raid.a 00:03:26.440 LIB libspdk_bdev_iscsi.a 00:03:26.440 SO libspdk_bdev_iscsi.so.5.0 00:03:26.440 SO libspdk_bdev_raid.so.5.0 00:03:26.699 SYMLINK libspdk_bdev_iscsi.so 00:03:26.699 SYMLINK libspdk_bdev_raid.so 00:03:26.699 LIB libspdk_bdev_virtio.a 00:03:26.699 SO libspdk_bdev_virtio.so.5.0 00:03:26.699 SYMLINK libspdk_bdev_virtio.so 00:03:27.266 LIB libspdk_bdev_nvme.a 00:03:27.525 SO libspdk_bdev_nvme.so.6.0 00:03:27.525 SYMLINK libspdk_bdev_nvme.so 00:03:27.783 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:27.783 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:27.783 CC module/event/subsystems/vmd/vmd.o 00:03:27.783 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:27.783 CC module/event/subsystems/sock/sock.o 00:03:27.783 CC module/event/subsystems/iobuf/iobuf.o 00:03:27.783 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:27.783 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.043 LIB libspdk_event_scheduler.a 00:03:28.043 LIB libspdk_event_vhost_blk.a 00:03:28.043 LIB libspdk_event_sock.a 00:03:28.043 LIB libspdk_event_vfu_tgt.a 00:03:28.043 LIB libspdk_event_vmd.a 00:03:28.043 LIB libspdk_event_iobuf.a 00:03:28.043 SO libspdk_event_vhost_blk.so.2.0 00:03:28.043 SO libspdk_event_sock.so.4.0 00:03:28.043 SO libspdk_event_scheduler.so.3.0 
00:03:28.043 SO libspdk_event_vfu_tgt.so.2.0 00:03:28.043 SO libspdk_event_vmd.so.5.0 00:03:28.043 SO libspdk_event_iobuf.so.2.0 00:03:28.043 SYMLINK libspdk_event_vfu_tgt.so 00:03:28.043 SYMLINK libspdk_event_vhost_blk.so 00:03:28.043 SYMLINK libspdk_event_scheduler.so 00:03:28.043 SYMLINK libspdk_event_sock.so 00:03:28.043 SYMLINK libspdk_event_iobuf.so 00:03:28.043 SYMLINK libspdk_event_vmd.so 00:03:28.303 CC module/event/subsystems/accel/accel.o 00:03:28.567 LIB libspdk_event_accel.a 00:03:28.567 SO libspdk_event_accel.so.5.0 00:03:28.567 SYMLINK libspdk_event_accel.so 00:03:28.826 CC module/event/subsystems/bdev/bdev.o 00:03:29.084 LIB libspdk_event_bdev.a 00:03:29.084 SO libspdk_event_bdev.so.5.0 00:03:29.084 SYMLINK libspdk_event_bdev.so 00:03:29.342 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.342 CC module/event/subsystems/ublk/ublk.o 00:03:29.342 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.342 CC module/event/subsystems/nbd/nbd.o 00:03:29.342 CC module/event/subsystems/scsi/scsi.o 00:03:29.342 LIB libspdk_event_ublk.a 00:03:29.342 LIB libspdk_event_nbd.a 00:03:29.342 LIB libspdk_event_scsi.a 00:03:29.342 SO libspdk_event_ublk.so.2.0 00:03:29.342 SO libspdk_event_nbd.so.5.0 00:03:29.342 SO libspdk_event_scsi.so.5.0 00:03:29.601 SYMLINK libspdk_event_ublk.so 00:03:29.601 SYMLINK libspdk_event_nbd.so 00:03:29.601 SYMLINK libspdk_event_scsi.so 00:03:29.601 LIB libspdk_event_nvmf.a 00:03:29.601 SO libspdk_event_nvmf.so.5.0 00:03:29.601 SYMLINK libspdk_event_nvmf.so 00:03:29.601 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:29.601 CC module/event/subsystems/iscsi/iscsi.o 00:03:29.860 LIB libspdk_event_vhost_scsi.a 00:03:29.860 LIB libspdk_event_iscsi.a 00:03:29.860 SO libspdk_event_vhost_scsi.so.2.0 00:03:29.860 SO libspdk_event_iscsi.so.5.0 00:03:29.860 SYMLINK libspdk_event_vhost_scsi.so 00:03:30.119 SYMLINK libspdk_event_iscsi.so 00:03:30.119 SO libspdk.so.5.0 00:03:30.119 SYMLINK libspdk.so 00:03:30.379 CXX app/trace/trace.o 00:03:30.379 CC app/trace_record/trace_record.o 00:03:30.379 CC app/nvmf_tgt/nvmf_main.o 00:03:30.379 CC app/iscsi_tgt/iscsi_tgt.o 00:03:30.379 CC app/spdk_tgt/spdk_tgt.o 00:03:30.379 CC examples/accel/perf/accel_perf.o 00:03:30.379 CC test/blobfs/mkfs/mkfs.o 00:03:30.379 CC test/app/bdev_svc/bdev_svc.o 00:03:30.379 CC test/bdev/bdevio/bdevio.o 00:03:30.379 CC test/accel/dif/dif.o 00:03:30.637 LINK nvmf_tgt 00:03:30.637 LINK spdk_trace_record 00:03:30.637 LINK iscsi_tgt 00:03:30.637 LINK spdk_tgt 00:03:30.637 LINK bdev_svc 00:03:30.637 LINK mkfs 00:03:30.637 LINK spdk_trace 00:03:30.896 LINK dif 00:03:30.896 LINK accel_perf 00:03:30.896 LINK bdevio 00:03:30.896 CC test/app/histogram_perf/histogram_perf.o 00:03:30.896 CC test/app/jsoncat/jsoncat.o 00:03:30.896 CC examples/bdev/hello_world/hello_bdev.o 00:03:30.896 CC examples/bdev/bdevperf/bdevperf.o 00:03:30.896 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:30.896 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:31.154 CC app/spdk_lspci/spdk_lspci.o 00:03:31.154 LINK jsoncat 00:03:31.154 LINK histogram_perf 00:03:31.154 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:31.154 TEST_HEADER include/spdk/accel.h 00:03:31.154 TEST_HEADER include/spdk/accel_module.h 00:03:31.154 TEST_HEADER include/spdk/assert.h 00:03:31.154 TEST_HEADER include/spdk/barrier.h 00:03:31.154 TEST_HEADER include/spdk/base64.h 00:03:31.154 TEST_HEADER include/spdk/bdev.h 00:03:31.154 TEST_HEADER include/spdk/bdev_module.h 00:03:31.155 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.155 TEST_HEADER 
include/spdk/bit_array.h 00:03:31.155 LINK hello_bdev 00:03:31.155 TEST_HEADER include/spdk/bit_pool.h 00:03:31.155 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.155 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.155 TEST_HEADER include/spdk/blobfs.h 00:03:31.155 TEST_HEADER include/spdk/blob.h 00:03:31.155 TEST_HEADER include/spdk/conf.h 00:03:31.155 TEST_HEADER include/spdk/config.h 00:03:31.155 TEST_HEADER include/spdk/cpuset.h 00:03:31.155 TEST_HEADER include/spdk/crc16.h 00:03:31.155 TEST_HEADER include/spdk/crc32.h 00:03:31.155 TEST_HEADER include/spdk/crc64.h 00:03:31.155 TEST_HEADER include/spdk/dif.h 00:03:31.155 TEST_HEADER include/spdk/dma.h 00:03:31.155 TEST_HEADER include/spdk/endian.h 00:03:31.155 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.155 LINK spdk_lspci 00:03:31.155 TEST_HEADER include/spdk/env.h 00:03:31.155 TEST_HEADER include/spdk/event.h 00:03:31.155 TEST_HEADER include/spdk/fd_group.h 00:03:31.155 TEST_HEADER include/spdk/fd.h 00:03:31.155 TEST_HEADER include/spdk/file.h 00:03:31.155 TEST_HEADER include/spdk/ftl.h 00:03:31.155 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.155 TEST_HEADER include/spdk/hexlify.h 00:03:31.155 TEST_HEADER include/spdk/histogram_data.h 00:03:31.155 TEST_HEADER include/spdk/idxd.h 00:03:31.155 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.155 TEST_HEADER include/spdk/init.h 00:03:31.155 TEST_HEADER include/spdk/ioat.h 00:03:31.155 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.155 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.155 TEST_HEADER include/spdk/json.h 00:03:31.155 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.155 TEST_HEADER include/spdk/likely.h 00:03:31.155 TEST_HEADER include/spdk/log.h 00:03:31.155 TEST_HEADER include/spdk/lvol.h 00:03:31.155 TEST_HEADER include/spdk/memory.h 00:03:31.155 TEST_HEADER include/spdk/mmio.h 00:03:31.155 TEST_HEADER include/spdk/nbd.h 00:03:31.155 TEST_HEADER include/spdk/notify.h 00:03:31.155 CC test/dma/test_dma/test_dma.o 00:03:31.155 TEST_HEADER include/spdk/nvme.h 00:03:31.155 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.155 CC test/app/stub/stub.o 00:03:31.155 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.155 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.155 CC app/spdk_nvme_perf/perf.o 00:03:31.155 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.155 TEST_HEADER include/spdk/nvme_zns.h 00:03:31.155 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:31.155 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.155 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.155 TEST_HEADER include/spdk/nvmf.h 00:03:31.155 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.155 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.155 TEST_HEADER include/spdk/opal.h 00:03:31.155 TEST_HEADER include/spdk/opal_spec.h 00:03:31.155 TEST_HEADER include/spdk/pci_ids.h 00:03:31.155 TEST_HEADER include/spdk/pipe.h 00:03:31.155 TEST_HEADER include/spdk/queue.h 00:03:31.155 TEST_HEADER include/spdk/reduce.h 00:03:31.155 TEST_HEADER include/spdk/rpc.h 00:03:31.155 TEST_HEADER include/spdk/scheduler.h 00:03:31.155 TEST_HEADER include/spdk/scsi.h 00:03:31.413 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.413 TEST_HEADER include/spdk/sock.h 00:03:31.413 TEST_HEADER include/spdk/stdinc.h 00:03:31.413 TEST_HEADER include/spdk/string.h 00:03:31.413 TEST_HEADER include/spdk/thread.h 00:03:31.413 TEST_HEADER include/spdk/trace.h 00:03:31.413 TEST_HEADER include/spdk/trace_parser.h 00:03:31.413 TEST_HEADER include/spdk/tree.h 00:03:31.413 TEST_HEADER include/spdk/ublk.h 00:03:31.413 TEST_HEADER include/spdk/util.h 00:03:31.413 TEST_HEADER 
include/spdk/uuid.h 00:03:31.413 TEST_HEADER include/spdk/version.h 00:03:31.413 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:31.413 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:31.413 TEST_HEADER include/spdk/vhost.h 00:03:31.413 LINK nvme_fuzz 00:03:31.413 TEST_HEADER include/spdk/vmd.h 00:03:31.413 TEST_HEADER include/spdk/xor.h 00:03:31.413 TEST_HEADER include/spdk/zipf.h 00:03:31.413 CXX test/cpp_headers/accel.o 00:03:31.413 CC app/spdk_nvme_identify/identify.o 00:03:31.413 CC app/spdk_nvme_discover/discovery_aer.o 00:03:31.413 LINK stub 00:03:31.413 CXX test/cpp_headers/accel_module.o 00:03:31.672 CC app/spdk_top/spdk_top.o 00:03:31.672 LINK test_dma 00:03:31.672 LINK bdevperf 00:03:31.672 LINK spdk_nvme_discover 00:03:31.672 LINK vhost_fuzz 00:03:31.672 CXX test/cpp_headers/assert.o 00:03:31.672 CC app/vhost/vhost.o 00:03:31.931 CXX test/cpp_headers/barrier.o 00:03:31.931 LINK vhost 00:03:31.931 CC test/event/event_perf/event_perf.o 00:03:31.931 CC examples/blob/hello_world/hello_blob.o 00:03:31.931 CC examples/ioat/perf/perf.o 00:03:31.931 CC test/env/mem_callbacks/mem_callbacks.o 00:03:31.931 LINK spdk_nvme_perf 00:03:31.931 CXX test/cpp_headers/base64.o 00:03:32.190 LINK event_perf 00:03:32.190 LINK spdk_nvme_identify 00:03:32.190 LINK ioat_perf 00:03:32.190 CXX test/cpp_headers/bdev.o 00:03:32.190 LINK hello_blob 00:03:32.190 CC examples/ioat/verify/verify.o 00:03:32.190 CC test/event/reactor/reactor.o 00:03:32.190 CC test/event/reactor_perf/reactor_perf.o 00:03:32.448 CC test/lvol/esnap/esnap.o 00:03:32.448 CXX test/cpp_headers/bdev_module.o 00:03:32.448 LINK reactor 00:03:32.448 CC test/event/app_repeat/app_repeat.o 00:03:32.448 LINK verify 00:03:32.448 LINK spdk_top 00:03:32.448 LINK reactor_perf 00:03:32.448 CC examples/blob/cli/blobcli.o 00:03:32.448 LINK iscsi_fuzz 00:03:32.448 LINK app_repeat 00:03:32.448 CXX test/cpp_headers/bdev_zone.o 00:03:32.448 LINK mem_callbacks 00:03:32.706 CC test/nvme/aer/aer.o 00:03:32.706 CC examples/nvme/hello_world/hello_world.o 00:03:32.706 CC examples/sock/hello_world/hello_sock.o 00:03:32.706 CC app/spdk_dd/spdk_dd.o 00:03:32.706 CXX test/cpp_headers/bit_array.o 00:03:32.706 CC test/env/vtophys/vtophys.o 00:03:32.706 CC examples/nvme/reconnect/reconnect.o 00:03:32.706 CC test/event/scheduler/scheduler.o 00:03:32.964 LINK hello_world 00:03:32.964 LINK vtophys 00:03:32.964 CXX test/cpp_headers/bit_pool.o 00:03:32.964 LINK hello_sock 00:03:32.964 LINK aer 00:03:32.964 LINK blobcli 00:03:32.964 LINK spdk_dd 00:03:32.964 LINK scheduler 00:03:32.964 CXX test/cpp_headers/blob_bdev.o 00:03:33.222 CC test/nvme/reset/reset.o 00:03:33.222 CC test/rpc_client/rpc_client_test.o 00:03:33.222 LINK reconnect 00:03:33.222 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.222 CC app/fio/nvme/fio_plugin.o 00:03:33.222 CC app/fio/bdev/fio_plugin.o 00:03:33.222 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.222 LINK env_dpdk_post_init 00:03:33.222 LINK rpc_client_test 00:03:33.222 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:33.222 CC examples/nvme/arbitration/arbitration.o 00:03:33.479 LINK reset 00:03:33.479 CC test/thread/poller_perf/poller_perf.o 00:03:33.479 CXX test/cpp_headers/blobfs.o 00:03:33.479 CC test/env/memory/memory_ut.o 00:03:33.479 CC test/nvme/sgl/sgl.o 00:03:33.479 LINK poller_perf 00:03:33.479 CC test/env/pci/pci_ut.o 00:03:33.736 CXX test/cpp_headers/blob.o 00:03:33.736 LINK arbitration 00:03:33.736 LINK spdk_bdev 00:03:33.736 LINK spdk_nvme 00:03:33.736 LINK nvme_manage 00:03:33.736 LINK sgl 00:03:33.736 CXX 
test/cpp_headers/conf.o 00:03:33.736 CC examples/vmd/lsvmd/lsvmd.o 00:03:33.736 CC test/nvme/e2edp/nvme_dp.o 00:03:33.993 CC test/nvme/overhead/overhead.o 00:03:33.993 LINK lsvmd 00:03:33.993 LINK pci_ut 00:03:33.993 CC examples/nvme/hotplug/hotplug.o 00:03:33.993 CXX test/cpp_headers/config.o 00:03:33.993 CXX test/cpp_headers/cpuset.o 00:03:33.993 CC examples/nvmf/nvmf/nvmf.o 00:03:33.993 CC examples/vmd/led/led.o 00:03:33.993 LINK nvme_dp 00:03:34.251 CXX test/cpp_headers/crc16.o 00:03:34.251 LINK led 00:03:34.251 LINK overhead 00:03:34.251 CC examples/util/zipf/zipf.o 00:03:34.251 LINK hotplug 00:03:34.251 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:34.251 LINK nvmf 00:03:34.251 CC test/nvme/err_injection/err_injection.o 00:03:34.251 CXX test/cpp_headers/crc32.o 00:03:34.251 LINK zipf 00:03:34.251 CC test/nvme/startup/startup.o 00:03:34.251 LINK memory_ut 00:03:34.251 CC test/nvme/reserve/reserve.o 00:03:34.509 CC examples/nvme/abort/abort.o 00:03:34.509 LINK cmb_copy 00:03:34.509 CXX test/cpp_headers/crc64.o 00:03:34.509 LINK err_injection 00:03:34.509 LINK startup 00:03:34.509 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:34.509 CXX test/cpp_headers/dif.o 00:03:34.509 LINK reserve 00:03:34.768 CC examples/thread/thread/thread_ex.o 00:03:34.768 CC test/nvme/simple_copy/simple_copy.o 00:03:34.768 CC test/nvme/connect_stress/connect_stress.o 00:03:34.768 CXX test/cpp_headers/dma.o 00:03:34.768 LINK pmr_persistence 00:03:34.768 LINK abort 00:03:34.768 CC examples/idxd/perf/perf.o 00:03:34.768 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.768 CXX test/cpp_headers/endian.o 00:03:35.026 LINK simple_copy 00:03:35.026 LINK thread 00:03:35.027 LINK connect_stress 00:03:35.027 CXX test/cpp_headers/env_dpdk.o 00:03:35.027 CC test/nvme/boot_partition/boot_partition.o 00:03:35.027 LINK interrupt_tgt 00:03:35.027 CXX test/cpp_headers/env.o 00:03:35.027 CC test/nvme/compliance/nvme_compliance.o 00:03:35.027 CXX test/cpp_headers/event.o 00:03:35.027 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.027 LINK idxd_perf 00:03:35.285 LINK boot_partition 00:03:35.285 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:35.285 CXX test/cpp_headers/fd_group.o 00:03:35.285 CC test/nvme/fdp/fdp.o 00:03:35.285 CXX test/cpp_headers/fd.o 00:03:35.285 CXX test/cpp_headers/file.o 00:03:35.285 LINK fused_ordering 00:03:35.285 CC test/nvme/cuse/cuse.o 00:03:35.285 CXX test/cpp_headers/ftl.o 00:03:35.285 LINK doorbell_aers 00:03:35.285 LINK nvme_compliance 00:03:35.543 CXX test/cpp_headers/gpt_spec.o 00:03:35.543 CXX test/cpp_headers/hexlify.o 00:03:35.543 CXX test/cpp_headers/histogram_data.o 00:03:35.543 LINK fdp 00:03:35.543 CXX test/cpp_headers/idxd.o 00:03:35.543 CXX test/cpp_headers/idxd_spec.o 00:03:35.543 CXX test/cpp_headers/init.o 00:03:35.543 CXX test/cpp_headers/ioat.o 00:03:35.543 CXX test/cpp_headers/ioat_spec.o 00:03:35.543 CXX test/cpp_headers/iscsi_spec.o 00:03:35.802 CXX test/cpp_headers/json.o 00:03:35.802 CXX test/cpp_headers/jsonrpc.o 00:03:35.802 CXX test/cpp_headers/likely.o 00:03:35.802 CXX test/cpp_headers/log.o 00:03:35.802 CXX test/cpp_headers/lvol.o 00:03:35.802 CXX test/cpp_headers/memory.o 00:03:35.802 CXX test/cpp_headers/mmio.o 00:03:35.802 CXX test/cpp_headers/nbd.o 00:03:35.802 CXX test/cpp_headers/notify.o 00:03:35.802 CXX test/cpp_headers/nvme.o 00:03:35.802 CXX test/cpp_headers/nvme_intel.o 00:03:35.802 CXX test/cpp_headers/nvme_ocssd.o 00:03:35.802 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:36.059 CXX test/cpp_headers/nvme_spec.o 00:03:36.059 CXX 
test/cpp_headers/nvme_zns.o 00:03:36.059 CXX test/cpp_headers/nvmf_cmd.o 00:03:36.059 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:36.059 CXX test/cpp_headers/nvmf.o 00:03:36.059 CXX test/cpp_headers/nvmf_spec.o 00:03:36.317 CXX test/cpp_headers/nvmf_transport.o 00:03:36.317 CXX test/cpp_headers/opal.o 00:03:36.317 CXX test/cpp_headers/opal_spec.o 00:03:36.317 CXX test/cpp_headers/pci_ids.o 00:03:36.317 CXX test/cpp_headers/pipe.o 00:03:36.317 CXX test/cpp_headers/queue.o 00:03:36.317 CXX test/cpp_headers/reduce.o 00:03:36.317 CXX test/cpp_headers/rpc.o 00:03:36.317 CXX test/cpp_headers/scheduler.o 00:03:36.317 LINK cuse 00:03:36.317 CXX test/cpp_headers/scsi.o 00:03:36.575 CXX test/cpp_headers/scsi_spec.o 00:03:36.575 CXX test/cpp_headers/sock.o 00:03:36.575 CXX test/cpp_headers/stdinc.o 00:03:36.575 CXX test/cpp_headers/string.o 00:03:36.575 CXX test/cpp_headers/thread.o 00:03:36.575 CXX test/cpp_headers/trace.o 00:03:36.575 CXX test/cpp_headers/trace_parser.o 00:03:36.575 CXX test/cpp_headers/tree.o 00:03:36.575 CXX test/cpp_headers/ublk.o 00:03:36.575 CXX test/cpp_headers/util.o 00:03:36.575 CXX test/cpp_headers/uuid.o 00:03:36.575 CXX test/cpp_headers/version.o 00:03:36.575 CXX test/cpp_headers/vfio_user_pci.o 00:03:36.833 CXX test/cpp_headers/vfio_user_spec.o 00:03:36.833 CXX test/cpp_headers/vhost.o 00:03:36.834 CXX test/cpp_headers/vmd.o 00:03:36.834 CXX test/cpp_headers/xor.o 00:03:36.834 LINK esnap 00:03:36.834 CXX test/cpp_headers/zipf.o 00:03:42.105 00:03:42.105 real 1m3.863s 00:03:42.105 user 6m32.595s 00:03:42.105 sys 1m37.595s 00:03:42.105 02:20:21 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:42.105 02:20:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:42.105 ************************************ 00:03:42.105 END TEST make 00:03:42.105 ************************************ 00:03:42.105 02:20:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:42.105 02:20:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:42.105 02:20:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:42.105 02:20:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:42.105 02:20:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:42.105 02:20:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:42.105 02:20:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:42.105 02:20:21 -- scripts/common.sh@335 -- # IFS=.-: 00:03:42.105 02:20:21 -- scripts/common.sh@335 -- # read -ra ver1 00:03:42.105 02:20:21 -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.105 02:20:21 -- scripts/common.sh@336 -- # read -ra ver2 00:03:42.106 02:20:21 -- scripts/common.sh@337 -- # local 'op=<' 00:03:42.106 02:20:21 -- scripts/common.sh@339 -- # ver1_l=2 00:03:42.106 02:20:21 -- scripts/common.sh@340 -- # ver2_l=1 00:03:42.106 02:20:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:42.106 02:20:21 -- scripts/common.sh@343 -- # case "$op" in 00:03:42.106 02:20:21 -- scripts/common.sh@344 -- # : 1 00:03:42.106 02:20:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:42.106 02:20:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.106 02:20:21 -- scripts/common.sh@364 -- # decimal 1 00:03:42.106 02:20:21 -- scripts/common.sh@352 -- # local d=1 00:03:42.106 02:20:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.106 02:20:21 -- scripts/common.sh@354 -- # echo 1 00:03:42.106 02:20:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:42.106 02:20:21 -- scripts/common.sh@365 -- # decimal 2 00:03:42.106 02:20:21 -- scripts/common.sh@352 -- # local d=2 00:03:42.106 02:20:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.106 02:20:21 -- scripts/common.sh@354 -- # echo 2 00:03:42.106 02:20:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:42.106 02:20:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:42.106 02:20:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:42.106 02:20:21 -- scripts/common.sh@367 -- # return 0 00:03:42.106 02:20:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.106 02:20:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.106 --rc genhtml_branch_coverage=1 00:03:42.106 --rc genhtml_function_coverage=1 00:03:42.106 --rc genhtml_legend=1 00:03:42.106 --rc geninfo_all_blocks=1 00:03:42.106 --rc geninfo_unexecuted_blocks=1 00:03:42.106 00:03:42.106 ' 00:03:42.106 02:20:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.106 --rc genhtml_branch_coverage=1 00:03:42.106 --rc genhtml_function_coverage=1 00:03:42.106 --rc genhtml_legend=1 00:03:42.106 --rc geninfo_all_blocks=1 00:03:42.106 --rc geninfo_unexecuted_blocks=1 00:03:42.106 00:03:42.106 ' 00:03:42.106 02:20:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.106 --rc genhtml_branch_coverage=1 00:03:42.106 --rc genhtml_function_coverage=1 00:03:42.106 --rc genhtml_legend=1 00:03:42.106 --rc geninfo_all_blocks=1 00:03:42.106 --rc geninfo_unexecuted_blocks=1 00:03:42.106 00:03:42.106 ' 00:03:42.106 02:20:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:42.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.106 --rc genhtml_branch_coverage=1 00:03:42.106 --rc genhtml_function_coverage=1 00:03:42.106 --rc genhtml_legend=1 00:03:42.106 --rc geninfo_all_blocks=1 00:03:42.106 --rc geninfo_unexecuted_blocks=1 00:03:42.106 00:03:42.106 ' 00:03:42.106 02:20:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:42.106 02:20:21 -- nvmf/common.sh@7 -- # uname -s 00:03:42.106 02:20:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.106 02:20:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.106 02:20:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.106 02:20:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.106 02:20:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.106 02:20:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.106 02:20:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.106 02:20:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.106 02:20:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.106 02:20:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.106 02:20:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:03:42.106 
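The lcov version gate traced a short way above (scripts/common.sh, "lt 1.15 2") decides whether the extra --rc lcov_branch_coverage / lcov_function_coverage flags are folded into LCOV_OPTS. A simplified, illustrative stand-in for that dotted-version comparison (not the exact cmp_versions implementation, and assuming a plain field-by-field numeric compare is sufficient):

    # Split both versions on '.', '-' and ':' and compare numerically field by
    # field; lt 1.15 2 succeeds because 1 < 2 in the first field, which is what
    # enables the branch/function coverage flags in the trace above.
    lt() {
        local -a ver1 ver2
        local i len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < len; i++)); do
            ((${ver1[i]:-0} < ${ver2[i]:-0})) && return 0
            ((${ver1[i]:-0} > ${ver2[i]:-0})) && return 1
        done
        return 1   # versions are equal, so "less than" is false
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && \
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'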
02:20:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:03:42.106 02:20:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.106 02:20:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.106 02:20:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:42.106 02:20:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:42.106 02:20:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.106 02:20:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.106 02:20:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.106 02:20:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.106 02:20:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.106 02:20:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.106 02:20:21 -- paths/export.sh@5 -- # export PATH 00:03:42.106 02:20:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.106 02:20:21 -- nvmf/common.sh@46 -- # : 0 00:03:42.106 02:20:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:42.106 02:20:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:42.106 02:20:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:42.106 02:20:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.106 02:20:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.106 02:20:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:42.106 02:20:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:42.106 02:20:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:42.106 02:20:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:42.106 02:20:21 -- spdk/autotest.sh@32 -- # uname -s 00:03:42.106 02:20:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:42.106 02:20:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:42.106 02:20:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:42.106 02:20:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:42.106 02:20:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:42.106 02:20:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:42.106 02:20:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:42.106 02:20:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:42.106 02:20:21 -- spdk/autotest.sh@48 -- # 
udevadm_pid=49740 00:03:42.106 02:20:21 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:42.106 02:20:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:42.106 02:20:22 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:42.106 02:20:22 -- spdk/autotest.sh@54 -- # echo 49751 00:03:42.106 02:20:22 -- spdk/autotest.sh@56 -- # echo 49753 00:03:42.106 02:20:22 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:42.106 02:20:22 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:42.106 02:20:22 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.106 02:20:22 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:42.106 02:20:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.106 02:20:22 -- common/autotest_common.sh@10 -- # set +x 00:03:42.106 02:20:22 -- spdk/autotest.sh@70 -- # create_test_list 00:03:42.106 02:20:22 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:42.106 02:20:22 -- common/autotest_common.sh@10 -- # set +x 00:03:42.106 02:20:22 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:42.106 02:20:22 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:42.106 02:20:22 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:42.106 02:20:22 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:42.106 02:20:22 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:42.106 02:20:22 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:42.106 02:20:22 -- common/autotest_common.sh@1450 -- # uname 00:03:42.106 02:20:22 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:42.106 02:20:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:42.106 02:20:22 -- common/autotest_common.sh@1470 -- # uname 00:03:42.106 02:20:22 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:42.106 02:20:22 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:42.106 02:20:22 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:42.106 lcov: LCOV version 1.15 00:03:42.106 02:20:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:48.682 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:48.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:48.682 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:48.682 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:48.682 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:48.682 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:06.771 02:20:47 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:06.771 02:20:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:06.771 02:20:47 -- common/autotest_common.sh@10 -- # set +x 00:04:06.771 02:20:47 -- spdk/autotest.sh@89 -- # rm -f 00:04:06.771 02:20:47 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.708 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:07.708 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:07.708 02:20:48 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:07.708 02:20:48 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:07.708 02:20:48 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:07.708 02:20:48 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:07.708 02:20:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.708 02:20:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:07.708 02:20:48 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:07.708 02:20:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.708 02:20:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.708 02:20:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.708 02:20:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:07.708 02:20:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:07.708 02:20:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:07.708 02:20:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.708 02:20:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.708 02:20:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:07.708 02:20:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:07.708 02:20:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:07.708 02:20:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.708 02:20:48 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:07.708 02:20:48 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:07.708 02:20:48 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:07.708 02:20:48 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:07.708 02:20:48 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:07.708 02:20:48 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:07.708 02:20:48 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:07.708 02:20:48 -- spdk/autotest.sh@108 -- # grep -v p 00:04:07.708 02:20:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.708 02:20:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.708 02:20:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:07.708 02:20:48 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:07.708 02:20:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:07.708 No valid GPT data, bailing 00:04:07.708 02:20:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:04:07.708 02:20:48 -- scripts/common.sh@393 -- # pt= 00:04:07.708 02:20:48 -- scripts/common.sh@394 -- # return 1 00:04:07.708 02:20:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:07.708 1+0 records in 00:04:07.708 1+0 records out 00:04:07.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399574 s, 262 MB/s 00:04:07.708 02:20:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.708 02:20:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.708 02:20:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:04:07.708 02:20:48 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:07.708 02:20:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:07.708 No valid GPT data, bailing 00:04:07.708 02:20:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:07.708 02:20:48 -- scripts/common.sh@393 -- # pt= 00:04:07.708 02:20:48 -- scripts/common.sh@394 -- # return 1 00:04:07.708 02:20:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:07.967 1+0 records in 00:04:07.967 1+0 records out 00:04:07.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048062 s, 218 MB/s 00:04:07.967 02:20:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.967 02:20:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.967 02:20:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:04:07.967 02:20:48 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:07.967 02:20:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:07.967 No valid GPT data, bailing 00:04:07.967 02:20:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:07.967 02:20:48 -- scripts/common.sh@393 -- # pt= 00:04:07.967 02:20:48 -- scripts/common.sh@394 -- # return 1 00:04:07.967 02:20:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:07.967 1+0 records in 00:04:07.967 1+0 records out 00:04:07.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566159 s, 185 MB/s 00:04:07.967 02:20:48 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:07.967 02:20:48 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:07.967 02:20:48 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:04:07.967 02:20:48 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:07.967 02:20:48 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:07.967 No valid GPT data, bailing 00:04:07.967 02:20:48 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:07.967 02:20:48 -- scripts/common.sh@393 -- # pt= 00:04:07.967 02:20:48 -- scripts/common.sh@394 -- # return 1 00:04:07.967 02:20:48 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:07.967 1+0 records in 00:04:07.967 1+0 records out 00:04:07.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472955 s, 222 MB/s 00:04:07.967 02:20:48 -- spdk/autotest.sh@116 -- # sync 00:04:08.225 02:20:48 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.225 02:20:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.225 02:20:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.127 02:20:50 -- spdk/autotest.sh@122 -- # uname -s 00:04:10.127 02:20:50 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
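The pre-test cleanup traced above walks every NVMe namespace, leaves zoned namespaces alone, probes each remaining one for a partition label (spdk-gpt.py plus blkid), zero-fills the first MiB of any namespace with no recognizable label, and finally syncs. A condensed, illustrative sketch of that sequence (not the exact autotest.sh/common.sh code; partition nodes such as ...p1 are filtered out by the real script's "grep -v p"):

    # Wipe the header of every non-zoned, unlabeled NVMe namespace before testing.
    for dev in /dev/nvme*n*; do
        name=${dev##*/}
        # is_block_zoned in the trace: zoned namespaces are skipped
        if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
            continue
        fi
        # block_in_use in the trace: spdk-gpt.py reports "No valid GPT data, bailing"
        # and blkid finds no PTTYPE, so the namespace is considered free to wipe
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync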
00:04:10.127 02:20:50 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:10.127 02:20:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.127 02:20:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.127 02:20:50 -- common/autotest_common.sh@10 -- # set +x 00:04:10.127 ************************************ 00:04:10.127 START TEST setup.sh 00:04:10.127 ************************************ 00:04:10.127 02:20:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:10.127 * Looking for test storage... 00:04:10.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.127 02:20:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:10.127 02:20:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:10.127 02:20:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:10.127 02:20:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:10.127 02:20:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:10.127 02:20:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:10.127 02:20:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:10.127 02:20:50 -- scripts/common.sh@335 -- # IFS=.-: 00:04:10.127 02:20:50 -- scripts/common.sh@335 -- # read -ra ver1 00:04:10.127 02:20:50 -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.127 02:20:50 -- scripts/common.sh@336 -- # read -ra ver2 00:04:10.127 02:20:50 -- scripts/common.sh@337 -- # local 'op=<' 00:04:10.127 02:20:50 -- scripts/common.sh@339 -- # ver1_l=2 00:04:10.127 02:20:50 -- scripts/common.sh@340 -- # ver2_l=1 00:04:10.127 02:20:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:10.127 02:20:50 -- scripts/common.sh@343 -- # case "$op" in 00:04:10.127 02:20:50 -- scripts/common.sh@344 -- # : 1 00:04:10.127 02:20:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:10.127 02:20:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.127 02:20:50 -- scripts/common.sh@364 -- # decimal 1 00:04:10.127 02:20:50 -- scripts/common.sh@352 -- # local d=1 00:04:10.127 02:20:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.127 02:20:50 -- scripts/common.sh@354 -- # echo 1 00:04:10.127 02:20:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:10.127 02:20:50 -- scripts/common.sh@365 -- # decimal 2 00:04:10.127 02:20:50 -- scripts/common.sh@352 -- # local d=2 00:04:10.127 02:20:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.127 02:20:50 -- scripts/common.sh@354 -- # echo 2 00:04:10.127 02:20:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:10.127 02:20:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:10.127 02:20:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:10.127 02:20:50 -- scripts/common.sh@367 -- # return 0 00:04:10.127 02:20:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.127 02:20:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:10.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.127 --rc genhtml_branch_coverage=1 00:04:10.127 --rc genhtml_function_coverage=1 00:04:10.127 --rc genhtml_legend=1 00:04:10.127 --rc geninfo_all_blocks=1 00:04:10.127 --rc geninfo_unexecuted_blocks=1 00:04:10.127 00:04:10.127 ' 00:04:10.127 02:20:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:10.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.127 --rc genhtml_branch_coverage=1 00:04:10.127 --rc genhtml_function_coverage=1 00:04:10.127 --rc genhtml_legend=1 00:04:10.127 --rc geninfo_all_blocks=1 00:04:10.127 --rc geninfo_unexecuted_blocks=1 00:04:10.127 00:04:10.127 ' 00:04:10.127 02:20:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:10.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.127 --rc genhtml_branch_coverage=1 00:04:10.127 --rc genhtml_function_coverage=1 00:04:10.127 --rc genhtml_legend=1 00:04:10.127 --rc geninfo_all_blocks=1 00:04:10.127 --rc geninfo_unexecuted_blocks=1 00:04:10.127 00:04:10.127 ' 00:04:10.127 02:20:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:10.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.127 --rc genhtml_branch_coverage=1 00:04:10.127 --rc genhtml_function_coverage=1 00:04:10.127 --rc genhtml_legend=1 00:04:10.127 --rc geninfo_all_blocks=1 00:04:10.127 --rc geninfo_unexecuted_blocks=1 00:04:10.127 00:04:10.127 ' 00:04:10.127 02:20:50 -- setup/test-setup.sh@10 -- # uname -s 00:04:10.127 02:20:50 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:10.127 02:20:50 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:10.127 02:20:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.127 02:20:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.127 02:20:50 -- common/autotest_common.sh@10 -- # set +x 00:04:10.127 ************************************ 00:04:10.127 START TEST acl 00:04:10.127 ************************************ 00:04:10.127 02:20:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:10.386 * Looking for test storage... 
00:04:10.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.386 02:20:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:10.386 02:20:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:10.386 02:20:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:10.386 02:20:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:10.386 02:20:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:10.386 02:20:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:10.386 02:20:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:10.386 02:20:50 -- scripts/common.sh@335 -- # IFS=.-: 00:04:10.386 02:20:50 -- scripts/common.sh@335 -- # read -ra ver1 00:04:10.386 02:20:50 -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.386 02:20:50 -- scripts/common.sh@336 -- # read -ra ver2 00:04:10.386 02:20:50 -- scripts/common.sh@337 -- # local 'op=<' 00:04:10.386 02:20:50 -- scripts/common.sh@339 -- # ver1_l=2 00:04:10.386 02:20:50 -- scripts/common.sh@340 -- # ver2_l=1 00:04:10.386 02:20:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:10.386 02:20:50 -- scripts/common.sh@343 -- # case "$op" in 00:04:10.386 02:20:50 -- scripts/common.sh@344 -- # : 1 00:04:10.386 02:20:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:10.386 02:20:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.386 02:20:50 -- scripts/common.sh@364 -- # decimal 1 00:04:10.386 02:20:50 -- scripts/common.sh@352 -- # local d=1 00:04:10.386 02:20:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.386 02:20:50 -- scripts/common.sh@354 -- # echo 1 00:04:10.386 02:20:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:10.386 02:20:50 -- scripts/common.sh@365 -- # decimal 2 00:04:10.386 02:20:50 -- scripts/common.sh@352 -- # local d=2 00:04:10.386 02:20:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.386 02:20:50 -- scripts/common.sh@354 -- # echo 2 00:04:10.386 02:20:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:10.386 02:20:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:10.386 02:20:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:10.386 02:20:50 -- scripts/common.sh@367 -- # return 0 00:04:10.386 02:20:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.386 02:20:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:10.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.386 --rc genhtml_branch_coverage=1 00:04:10.386 --rc genhtml_function_coverage=1 00:04:10.386 --rc genhtml_legend=1 00:04:10.386 --rc geninfo_all_blocks=1 00:04:10.387 --rc geninfo_unexecuted_blocks=1 00:04:10.387 00:04:10.387 ' 00:04:10.387 02:20:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.387 --rc genhtml_branch_coverage=1 00:04:10.387 --rc genhtml_function_coverage=1 00:04:10.387 --rc genhtml_legend=1 00:04:10.387 --rc geninfo_all_blocks=1 00:04:10.387 --rc geninfo_unexecuted_blocks=1 00:04:10.387 00:04:10.387 ' 00:04:10.387 02:20:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.387 --rc genhtml_branch_coverage=1 00:04:10.387 --rc genhtml_function_coverage=1 00:04:10.387 --rc genhtml_legend=1 00:04:10.387 --rc geninfo_all_blocks=1 00:04:10.387 --rc geninfo_unexecuted_blocks=1 00:04:10.387 00:04:10.387 ' 00:04:10.387 02:20:50 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:10.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.387 --rc genhtml_branch_coverage=1 00:04:10.387 --rc genhtml_function_coverage=1 00:04:10.387 --rc genhtml_legend=1 00:04:10.387 --rc geninfo_all_blocks=1 00:04:10.387 --rc geninfo_unexecuted_blocks=1 00:04:10.387 00:04:10.387 ' 00:04:10.387 02:20:50 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:10.387 02:20:50 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:10.387 02:20:50 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:10.387 02:20:50 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:10.387 02:20:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.387 02:20:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:10.387 02:20:50 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:10.387 02:20:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.387 02:20:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.387 02:20:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.387 02:20:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:10.387 02:20:50 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:10.387 02:20:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:10.387 02:20:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.387 02:20:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.387 02:20:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:10.387 02:20:50 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:10.387 02:20:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:10.387 02:20:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.387 02:20:50 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.387 02:20:50 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:10.387 02:20:50 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:10.387 02:20:50 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:10.387 02:20:50 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.387 02:20:50 -- setup/acl.sh@12 -- # devs=() 00:04:10.387 02:20:50 -- setup/acl.sh@12 -- # declare -a devs 00:04:10.387 02:20:50 -- setup/acl.sh@13 -- # drivers=() 00:04:10.387 02:20:50 -- setup/acl.sh@13 -- # declare -A drivers 00:04:10.387 02:20:50 -- setup/acl.sh@51 -- # setup reset 00:04:10.387 02:20:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.387 02:20:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.324 02:20:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:11.324 02:20:51 -- setup/acl.sh@16 -- # local dev driver 00:04:11.324 02:20:51 -- setup/acl.sh@15 -- # setup output status 00:04:11.324 02:20:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.324 02:20:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.324 02:20:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:11.324 Hugepages 00:04:11.324 node hugesize free / total 00:04:11.324 02:20:51 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:11.324 02:20:51 -- setup/acl.sh@19 -- # continue 00:04:11.324 02:20:51 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:04:11.324 00:04:11.324 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.324 02:20:51 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:11.324 02:20:51 -- setup/acl.sh@19 -- # continue 00:04:11.324 02:20:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.324 02:20:51 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:11.324 02:20:51 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:11.324 02:20:51 -- setup/acl.sh@20 -- # continue 00:04:11.324 02:20:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.581 02:20:51 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:11.581 02:20:51 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:11.581 02:20:51 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:11.581 02:20:51 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:11.581 02:20:51 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:11.581 02:20:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.581 02:20:52 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:11.581 02:20:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:11.581 02:20:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:11.581 02:20:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:11.581 02:20:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:11.581 02:20:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.581 02:20:52 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:11.581 02:20:52 -- setup/acl.sh@54 -- # run_test denied denied 00:04:11.581 02:20:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.581 02:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.581 02:20:52 -- common/autotest_common.sh@10 -- # set +x 00:04:11.581 ************************************ 00:04:11.581 START TEST denied 00:04:11.581 ************************************ 00:04:11.581 02:20:52 -- common/autotest_common.sh@1114 -- # denied 00:04:11.581 02:20:52 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:11.581 02:20:52 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:11.581 02:20:52 -- setup/acl.sh@38 -- # setup output config 00:04:11.581 02:20:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.581 02:20:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.514 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:12.514 02:20:52 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:12.514 02:20:52 -- setup/acl.sh@28 -- # local dev driver 00:04:12.514 02:20:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:12.514 02:20:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:12.514 02:20:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:12.514 02:20:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:12.514 02:20:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:12.514 02:20:52 -- setup/acl.sh@41 -- # setup reset 00:04:12.514 02:20:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.514 02:20:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:13.079 00:04:13.079 real 0m1.451s 00:04:13.079 user 0m0.609s 00:04:13.079 sys 0m0.810s 00:04:13.079 ************************************ 00:04:13.079 END TEST denied 00:04:13.079 ************************************ 00:04:13.079 02:20:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:13.079 02:20:53 -- 
common/autotest_common.sh@10 -- # set +x 00:04:13.079 02:20:53 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:13.079 02:20:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.079 02:20:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.079 02:20:53 -- common/autotest_common.sh@10 -- # set +x 00:04:13.079 ************************************ 00:04:13.079 START TEST allowed 00:04:13.079 ************************************ 00:04:13.079 02:20:53 -- common/autotest_common.sh@1114 -- # allowed 00:04:13.079 02:20:53 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:13.079 02:20:53 -- setup/acl.sh@45 -- # setup output config 00:04:13.079 02:20:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.079 02:20:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.079 02:20:53 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:14.013 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:14.013 02:20:54 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:14.013 02:20:54 -- setup/acl.sh@28 -- # local dev driver 00:04:14.013 02:20:54 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:14.013 02:20:54 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:14.013 02:20:54 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:14.013 02:20:54 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:14.013 02:20:54 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:14.013 02:20:54 -- setup/acl.sh@48 -- # setup reset 00:04:14.013 02:20:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.013 02:20:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.580 00:04:14.580 real 0m1.548s 00:04:14.580 user 0m0.698s 00:04:14.580 sys 0m0.853s 00:04:14.580 02:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.580 ************************************ 00:04:14.580 END TEST allowed 00:04:14.580 ************************************ 00:04:14.580 02:20:55 -- common/autotest_common.sh@10 -- # set +x 00:04:14.580 00:04:14.580 real 0m4.469s 00:04:14.580 user 0m2.009s 00:04:14.580 sys 0m2.453s 00:04:14.580 02:20:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.580 02:20:55 -- common/autotest_common.sh@10 -- # set +x 00:04:14.580 ************************************ 00:04:14.580 END TEST acl 00:04:14.580 ************************************ 00:04:14.580 02:20:55 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:14.580 02:20:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.580 02:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.580 02:20:55 -- common/autotest_common.sh@10 -- # set +x 00:04:14.580 ************************************ 00:04:14.580 START TEST hugepages 00:04:14.580 ************************************ 00:04:14.580 02:20:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:14.840 * Looking for test storage... 
00:04:14.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.840 02:20:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:14.840 02:20:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:14.840 02:20:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:14.840 02:20:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:14.840 02:20:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:14.840 02:20:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:14.840 02:20:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:14.840 02:20:55 -- scripts/common.sh@335 -- # IFS=.-: 00:04:14.840 02:20:55 -- scripts/common.sh@335 -- # read -ra ver1 00:04:14.840 02:20:55 -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.840 02:20:55 -- scripts/common.sh@336 -- # read -ra ver2 00:04:14.840 02:20:55 -- scripts/common.sh@337 -- # local 'op=<' 00:04:14.840 02:20:55 -- scripts/common.sh@339 -- # ver1_l=2 00:04:14.840 02:20:55 -- scripts/common.sh@340 -- # ver2_l=1 00:04:14.840 02:20:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:14.840 02:20:55 -- scripts/common.sh@343 -- # case "$op" in 00:04:14.840 02:20:55 -- scripts/common.sh@344 -- # : 1 00:04:14.840 02:20:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:14.840 02:20:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.840 02:20:55 -- scripts/common.sh@364 -- # decimal 1 00:04:14.840 02:20:55 -- scripts/common.sh@352 -- # local d=1 00:04:14.840 02:20:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.840 02:20:55 -- scripts/common.sh@354 -- # echo 1 00:04:14.840 02:20:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:14.840 02:20:55 -- scripts/common.sh@365 -- # decimal 2 00:04:14.840 02:20:55 -- scripts/common.sh@352 -- # local d=2 00:04:14.840 02:20:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.841 02:20:55 -- scripts/common.sh@354 -- # echo 2 00:04:14.841 02:20:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:14.841 02:20:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:14.841 02:20:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:14.841 02:20:55 -- scripts/common.sh@367 -- # return 0 00:04:14.841 02:20:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.841 02:20:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:14.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.841 --rc genhtml_branch_coverage=1 00:04:14.841 --rc genhtml_function_coverage=1 00:04:14.841 --rc genhtml_legend=1 00:04:14.841 --rc geninfo_all_blocks=1 00:04:14.841 --rc geninfo_unexecuted_blocks=1 00:04:14.841 00:04:14.841 ' 00:04:14.841 02:20:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:14.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.841 --rc genhtml_branch_coverage=1 00:04:14.841 --rc genhtml_function_coverage=1 00:04:14.841 --rc genhtml_legend=1 00:04:14.841 --rc geninfo_all_blocks=1 00:04:14.841 --rc geninfo_unexecuted_blocks=1 00:04:14.841 00:04:14.841 ' 00:04:14.841 02:20:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:14.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.841 --rc genhtml_branch_coverage=1 00:04:14.841 --rc genhtml_function_coverage=1 00:04:14.841 --rc genhtml_legend=1 00:04:14.841 --rc geninfo_all_blocks=1 00:04:14.841 --rc geninfo_unexecuted_blocks=1 00:04:14.841 00:04:14.841 ' 00:04:14.841 02:20:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:14.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.841 --rc genhtml_branch_coverage=1 00:04:14.841 --rc genhtml_function_coverage=1 00:04:14.841 --rc genhtml_legend=1 00:04:14.841 --rc geninfo_all_blocks=1 00:04:14.841 --rc geninfo_unexecuted_blocks=1 00:04:14.841 00:04:14.841 ' 00:04:14.841 02:20:55 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:14.841 02:20:55 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:14.841 02:20:55 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:14.841 02:20:55 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:14.841 02:20:55 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:14.841 02:20:55 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:14.841 02:20:55 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:14.841 02:20:55 -- setup/common.sh@18 -- # local node= 00:04:14.841 02:20:55 -- setup/common.sh@19 -- # local var val 00:04:14.841 02:20:55 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.841 02:20:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.841 02:20:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.841 02:20:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.841 02:20:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.841 02:20:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 5830060 kB' 'MemAvailable: 7341176 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 496316 kB' 'Inactive: 1344872 kB' 'Active(anon): 127160 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 118296 kB' 'Mapped: 50640 kB' 'Shmem: 10508 kB' 'KReclaimable: 68068 kB' 'Slab: 163124 kB' 'SReclaimable: 68068 kB' 'SUnreclaim: 95056 kB' 'KernelStack: 6368 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 321264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- 
setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.841 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.841 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # continue 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.842 02:20:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.842 02:20:55 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.842 02:20:55 -- setup/common.sh@33 -- # echo 2048 00:04:14.842 02:20:55 -- setup/common.sh@33 -- # return 0 00:04:14.842 02:20:55 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:14.842 02:20:55 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:14.842 02:20:55 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:14.842 02:20:55 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:14.842 02:20:55 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:14.842 02:20:55 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:14.842 02:20:55 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:14.842 02:20:55 -- setup/hugepages.sh@207 -- # get_nodes 00:04:14.842 02:20:55 -- setup/hugepages.sh@27 -- # local node 00:04:14.842 02:20:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.842 02:20:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:14.842 02:20:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:14.842 02:20:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.842 02:20:55 -- setup/hugepages.sh@208 -- # clear_hp 00:04:14.842 02:20:55 -- setup/hugepages.sh@37 -- # local node hp 00:04:14.842 02:20:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.842 02:20:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.842 02:20:55 -- setup/hugepages.sh@41 -- # echo 0 00:04:14.842 02:20:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.842 02:20:55 -- setup/hugepages.sh@41 -- # echo 0 00:04:14.842 02:20:55 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:14.842 02:20:55 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:14.842 02:20:55 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:14.842 02:20:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.842 02:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.842 02:20:55 -- common/autotest_common.sh@10 -- # set +x 00:04:14.842 ************************************ 00:04:14.842 START TEST default_setup 00:04:14.842 ************************************ 00:04:14.842 02:20:55 -- common/autotest_common.sh@1114 -- # default_setup 00:04:14.842 02:20:55 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:14.842 02:20:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.842 02:20:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.842 02:20:55 -- setup/hugepages.sh@51 -- # shift 00:04:14.842 02:20:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.842 02:20:55 -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.842 02:20:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.842 02:20:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.842 02:20:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.842 02:20:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.842 02:20:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.842 02:20:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.842 02:20:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:14.842 02:20:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.843 02:20:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.843 02:20:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.843 02:20:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.843 02:20:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.843 02:20:55 -- setup/hugepages.sh@73 -- # return 0 00:04:14.843 02:20:55 -- setup/hugepages.sh@137 -- # setup output 00:04:14.843 02:20:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.843 02:20:55 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.780 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.780 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.780 02:20:56 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:15.780 02:20:56 -- setup/hugepages.sh@89 -- # local node 00:04:15.780 02:20:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.780 02:20:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.780 02:20:56 -- setup/hugepages.sh@92 -- # local surp 00:04:15.780 02:20:56 -- setup/hugepages.sh@93 -- # local resv 00:04:15.780 02:20:56 -- setup/hugepages.sh@94 -- # local anon 00:04:15.780 02:20:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.780 02:20:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.780 02:20:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.780 02:20:56 -- setup/common.sh@18 -- # local node= 00:04:15.780 02:20:56 -- setup/common.sh@19 -- # local var val 00:04:15.780 02:20:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.780 02:20:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.780 02:20:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.780 02:20:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.780 02:20:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.780 02:20:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.780 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.780 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7942000 kB' 'MemAvailable: 9452988 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498312 kB' 'Inactive: 1344876 kB' 'Active(anon): 129156 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120296 kB' 'Mapped: 50736 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 162888 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95084 kB' 'KernelStack: 6368 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- 
setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.781 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.781 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.782 02:20:56 -- setup/common.sh@33 -- # echo 0 00:04:15.782 02:20:56 -- setup/common.sh@33 -- # return 0 00:04:15.782 02:20:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:15.782 02:20:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.782 02:20:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.782 02:20:56 -- setup/common.sh@18 -- # local node= 00:04:15.782 02:20:56 -- setup/common.sh@19 -- # local var val 00:04:15.782 02:20:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.782 02:20:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.782 02:20:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.782 02:20:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.782 02:20:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.782 02:20:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7942104 kB' 'MemAvailable: 9453104 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498008 kB' 'Inactive: 1344888 kB' 'Active(anon): 128852 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344888 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119912 kB' 'Mapped: 50672 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 162888 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95084 kB' 'KernelStack: 6320 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- 
setup/common.sh@32 -- # continue 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.782 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.782 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 
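The same walk repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total before verify_nr_hugepages compares them with the requested count (1024 pages of 2048 kB in this run). A loose sketch of that kind of consistency check, reusing the illustrative get_meminfo_value helper above; this is not the script's exact expression:

requested=1024                               # count requested by default_setup
nr=$(cat /proc/sys/vm/nr_hugepages)          # global persistent pool size
total=$(get_meminfo_value HugePages_Total)
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)

# HugePages_Total is the persistent pool plus any surplus pages, so after a
# clean allocation both conditions should hold with surp and resv at 0.
if (( nr == requested )) && (( total == nr + surp )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "hugepage mismatch: requested=$requested nr=$nr total=$total surp=$surp resv=$resv" >&2
    exit 1
fi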
00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.783 02:20:56 -- setup/common.sh@33 -- # echo 0 00:04:15.783 02:20:56 -- setup/common.sh@33 -- # return 0 00:04:15.783 02:20:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:15.783 02:20:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.783 02:20:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.783 02:20:56 -- setup/common.sh@18 -- # local node= 00:04:15.783 02:20:56 -- setup/common.sh@19 -- # local var val 00:04:15.783 02:20:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.783 02:20:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.783 02:20:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.783 02:20:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.783 02:20:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.783 02:20:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.783 
02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7942364 kB' 'MemAvailable: 9453364 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497812 kB' 'Inactive: 1344888 kB' 'Active(anon): 128656 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344888 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 50672 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 162888 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95084 kB' 'KernelStack: 6320 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 
02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.783 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.783 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 
02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.784 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.784 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.785 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.785 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.785 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.785 02:20:56 -- setup/common.sh@33 -- # echo 0 00:04:15.785 02:20:56 -- setup/common.sh@33 -- # return 0 00:04:15.785 02:20:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:15.785 nr_hugepages=1024 00:04:15.785 02:20:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.785 resv_hugepages=0 00:04:15.785 02:20:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.785 surplus_hugepages=0 00:04:15.785 02:20:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.785 anon_hugepages=0 00:04:15.785 02:20:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.785 02:20:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.785 02:20:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.785 02:20:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.785 02:20:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.785 02:20:56 -- setup/common.sh@18 -- # local node= 00:04:15.785 02:20:56 -- setup/common.sh@19 -- # local var val 00:04:15.785 02:20:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:15.785 02:20:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.785 02:20:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.785 02:20:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.785 02:20:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.785 02:20:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.785 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.785 02:20:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7942884 kB' 'MemAvailable: 9453884 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498072 kB' 'Inactive: 1344888 kB' 'Active(anon): 128916 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344888 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119976 kB' 'Mapped: 50672 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 162888 kB' 
'SReclaimable: 67804 kB' 'SUnreclaim: 95084 kB' 'KernelStack: 6320 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:15.785 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.785 02:20:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.785 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.785 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.785 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.785 02:20:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.785 02:20:56 -- setup/common.sh@32 -- # continue 00:04:15.785 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:15.785 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:15.785 02:20:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.785 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.044 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.044 02:20:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 
02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- 
setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- 
setup/common.sh@32 -- # continue 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.045 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.045 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.045 02:20:56 -- setup/common.sh@33 -- # echo 1024 00:04:16.045 02:20:56 -- setup/common.sh@33 -- # return 0 00:04:16.046 02:20:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.046 02:20:56 -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.046 02:20:56 -- setup/hugepages.sh@27 -- # local node 00:04:16.046 02:20:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.046 02:20:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.046 02:20:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:16.046 02:20:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.046 02:20:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.046 02:20:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.046 02:20:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.046 02:20:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.046 02:20:56 -- setup/common.sh@18 -- # local node=0 00:04:16.046 02:20:56 -- setup/common.sh@19 -- # local var val 00:04:16.046 02:20:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.046 02:20:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.046 02:20:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.046 02:20:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.046 02:20:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.046 02:20:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7943064 kB' 'MemUsed: 4296056 kB' 'SwapCached: 0 kB' 'Active: 497768 kB' 'Inactive: 1344888 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344888 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 1724532 kB' 'Mapped: 50620 kB' 'AnonPages: 119676 kB' 'Shmem: 10484 kB' 'KernelStack: 6340 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67804 kB' 'Slab: 162888 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 
02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.046 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.046 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.047 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.047 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.047 02:20:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.047 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.047 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.047 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.047 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.047 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.047 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.047 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.047 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.047 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.047 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.047 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.047 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.047 02:20:56 -- setup/common.sh@33 -- # echo 0 00:04:16.047 02:20:56 -- setup/common.sh@33 -- # return 0 00:04:16.047 02:20:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.047 02:20:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.047 02:20:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.047 02:20:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.047 node0=1024 expecting 1024 00:04:16.047 02:20:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.047 02:20:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.047 00:04:16.047 real 0m0.999s 00:04:16.047 user 0m0.460s 00:04:16.047 sys 0m0.481s 00:04:16.047 02:20:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:16.047 02:20:56 -- common/autotest_common.sh@10 -- # set +x 00:04:16.047 ************************************ 00:04:16.047 END TEST default_setup 00:04:16.047 ************************************ 00:04:16.047 02:20:56 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:16.047 02:20:56 
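For reference, the trace above is setup/common.sh's get_meminfo walking every "key: value" pair of /sys/devices/system/node/node0/meminfo until it hits HugePages_Surp and echoes 0. A minimal standalone sketch of that lookup follows; the helper name is hypothetical and it only summarizes what the traced mapfile/read loop does (including stripping the "Node <N> " prefix that per-node meminfo files carry):

    # Hypothetical helper, shown only to summarize the traced loop.
    # key is a meminfo field name (e.g. HugePages_Surp); node is optional.
    get_meminfo_sketch() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics live in sysfs and prefix every line with "Node <N> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        sed -E 's/^Node [0-9]+ +//' "$mem_f" | awk -v k="$key:" '$1 == k { print $2; exit }'
    }
    # e.g. get_meminfo_sketch HugePages_Total   -> 1024 on this VM
    #      get_meminfo_sketch HugePages_Surp 0  -> 0, matching the "echo 0" in the trace above
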
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.047 02:20:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.047 02:20:56 -- common/autotest_common.sh@10 -- # set +x 00:04:16.047 ************************************ 00:04:16.047 START TEST per_node_1G_alloc 00:04:16.047 ************************************ 00:04:16.047 02:20:56 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:16.047 02:20:56 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:16.047 02:20:56 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:16.047 02:20:56 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.047 02:20:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:16.047 02:20:56 -- setup/hugepages.sh@51 -- # shift 00:04:16.047 02:20:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:16.047 02:20:56 -- setup/hugepages.sh@52 -- # local node_ids 00:04:16.047 02:20:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.047 02:20:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.047 02:20:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:16.047 02:20:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:16.047 02:20:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.047 02:20:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.047 02:20:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:16.047 02:20:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.047 02:20:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.047 02:20:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:16.047 02:20:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.047 02:20:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:16.047 02:20:56 -- setup/hugepages.sh@73 -- # return 0 00:04:16.047 02:20:56 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:16.047 02:20:56 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:16.047 02:20:56 -- setup/hugepages.sh@146 -- # setup output 00:04:16.047 02:20:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.047 02:20:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.306 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.306 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:16.306 02:20:56 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:16.306 02:20:56 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:16.306 02:20:56 -- setup/hugepages.sh@89 -- # local node 00:04:16.306 02:20:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.306 02:20:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.306 02:20:56 -- setup/hugepages.sh@92 -- # local surp 00:04:16.306 02:20:56 -- setup/hugepages.sh@93 -- # local resv 00:04:16.306 02:20:56 -- setup/hugepages.sh@94 -- # local anon 00:04:16.306 02:20:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.306 02:20:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.306 02:20:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.306 02:20:56 -- setup/common.sh@18 -- # local node= 00:04:16.306 02:20:56 -- setup/common.sh@19 -- # local var val 00:04:16.306 02:20:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.306 02:20:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.306 02:20:56 -- 
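As the trace shows, per_node_1G_alloc converts the requested 1048576 kB into pages of the default 2048 kB size (1048576 / 2048 = 512) and hands NRHUGE=512 HUGENODE=0 to scripts/setup.sh. A rough sketch of what a per-node reservation of that size boils down to at the kernel interface, using the standard sysfs paths; the actual allocation, device binding and hugetlbfs handling are done by scripts/setup.sh, not by these two lines:

    # Reserve 512 x 2 MiB hugepages on NUMA node 0, then read back what the kernel granted.
    echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    grep HugePages_ /sys/devices/system/node/node0/meminfo
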
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.306 02:20:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.306 02:20:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.306 02:20:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.306 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.306 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8996792 kB' 'MemAvailable: 10507796 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498596 kB' 'Inactive: 1344892 kB' 'Active(anon): 129440 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120600 kB' 'Mapped: 50744 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 162896 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95092 kB' 'KernelStack: 6376 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 
-- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 
02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.307 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.307 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.308 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.308 02:20:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.308 02:20:56 -- setup/common.sh@33 -- # echo 0 00:04:16.308 02:20:56 -- setup/common.sh@33 -- # return 0 00:04:16.308 02:20:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:16.308 02:20:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.308 02:20:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.308 02:20:56 -- setup/common.sh@18 -- # local node= 00:04:16.308 02:20:56 -- setup/common.sh@19 -- # local var val 00:04:16.308 02:20:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.308 02:20:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.308 02:20:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.308 02:20:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.308 02:20:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.308 02:20:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8996792 kB' 'MemAvailable: 10507796 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498208 kB' 'Inactive: 1344892 kB' 
'Active(anon): 129052 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120116 kB' 'Mapped: 50708 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 162896 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95092 kB' 'KernelStack: 6328 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # 
continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.570 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.570 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.571 02:20:56 -- setup/common.sh@33 -- # echo 0 00:04:16.571 02:20:56 -- setup/common.sh@33 -- # return 0 00:04:16.571 02:20:56 -- setup/hugepages.sh@99 -- # surp=0 00:04:16.571 02:20:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.571 02:20:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.571 02:20:56 -- setup/common.sh@18 -- # local node= 00:04:16.571 02:20:56 -- setup/common.sh@19 -- # local var val 00:04:16.571 02:20:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.571 02:20:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.571 02:20:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.571 02:20:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.571 02:20:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.571 02:20:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8997244 kB' 'MemAvailable: 10508248 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497660 kB' 'Inactive: 1344892 kB' 'Active(anon): 128504 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119568 kB' 'Mapped: 50620 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 162936 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95132 kB' 'KernelStack: 6320 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.571 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.571 02:20:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:56 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.572 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.572 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.572 02:20:57 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.572 02:20:57 -- setup/common.sh@33 -- # echo 0 00:04:16.572 02:20:57 -- setup/common.sh@33 -- # return 0 00:04:16.572 02:20:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:16.572 nr_hugepages=512 00:04:16.572 02:20:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:16.572 resv_hugepages=0 00:04:16.572 02:20:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.572 surplus_hugepages=0 00:04:16.572 02:20:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.572 anon_hugepages=0 00:04:16.572 02:20:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.572 02:20:57 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:16.572 02:20:57 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:16.572 02:20:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.572 02:20:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.572 02:20:57 -- setup/common.sh@18 -- # local node= 00:04:16.572 02:20:57 -- setup/common.sh@19 -- # local var val 00:04:16.573 02:20:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.573 02:20:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.573 02:20:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.573 02:20:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.573 02:20:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.573 02:20:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8997764 kB' 'MemAvailable: 10508768 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497920 kB' 'Inactive: 1344892 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119828 kB' 'Mapped: 50620 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 162936 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95132 kB' 'KernelStack: 6320 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 
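[editor's note] The loop traced above is setup/common.sh's get_meminfo scanning a captured /proc/meminfo (or per-node meminfo) snapshot field by field with IFS=': ' until it reaches the requested key, then echoing that key's value — it just returned 0 for HugePages_Rsvd and is now being re-run for HugePages_Total. A minimal sketch of that lookup pattern follows; it is not the SPDK helper itself, and the function name and the sed-based "Node N" prefix strip are illustrative assumptions.

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node is given and a per-node meminfo exists, read that instead,
    # as the trace does later for /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node lines carry a "Node N " prefix; drop it, then split each
    # "Key:   value [kB]" line on ':'/space exactly like the loop above.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Example: get_meminfo_sketch HugePages_Total   -> 512 in the snapshot above
#          get_meminfo_sketch HugePages_Surp 0  -> per-node lookup, 0 here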
02:20:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 
02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.573 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.573 02:20:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.574 02:20:57 -- setup/common.sh@33 -- # echo 512 00:04:16.574 02:20:57 -- setup/common.sh@33 -- # return 0 00:04:16.574 02:20:57 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:16.574 02:20:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.574 02:20:57 -- setup/hugepages.sh@27 -- # local node 00:04:16.574 02:20:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.574 02:20:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:16.574 02:20:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:16.574 02:20:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.574 02:20:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.574 02:20:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.574 02:20:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.574 02:20:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.574 02:20:57 -- setup/common.sh@18 -- # local node=0 00:04:16.574 02:20:57 -- setup/common.sh@19 -- # local 
var val 00:04:16.574 02:20:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:16.574 02:20:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.574 02:20:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.574 02:20:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.574 02:20:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.574 02:20:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8998240 kB' 'MemUsed: 3240880 kB' 'SwapCached: 0 kB' 'Active: 497696 kB' 'Inactive: 1344892 kB' 'Active(anon): 128540 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 1724532 kB' 'Mapped: 50620 kB' 'AnonPages: 119644 kB' 'Shmem: 10484 kB' 'KernelStack: 6352 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67804 kB' 'Slab: 162936 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- 
setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.574 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.574 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # continue 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:16.575 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:16.575 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.575 02:20:57 -- setup/common.sh@33 -- # echo 0 00:04:16.575 02:20:57 -- setup/common.sh@33 -- # return 0 00:04:16.575 02:20:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.575 02:20:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.575 02:20:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.575 02:20:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.575 node0=512 expecting 512 00:04:16.575 02:20:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:16.575 02:20:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:16.575 00:04:16.575 real 0m0.533s 00:04:16.575 user 0m0.278s 00:04:16.575 sys 0m0.291s 00:04:16.575 02:20:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:16.575 02:20:57 -- common/autotest_common.sh@10 -- # set +x 00:04:16.575 ************************************ 00:04:16.575 END TEST per_node_1G_alloc 00:04:16.575 ************************************ 00:04:16.575 02:20:57 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:16.575 02:20:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.575 02:20:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.575 02:20:57 -- common/autotest_common.sh@10 -- # set +x 00:04:16.575 ************************************ 00:04:16.575 START TEST even_2G_alloc 00:04:16.575 ************************************ 00:04:16.575 02:20:57 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:16.575 02:20:57 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:16.575 02:20:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:16.575 02:20:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.575 02:20:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.575 02:20:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:16.575 02:20:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.575 02:20:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.575 02:20:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.575 02:20:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:16.575 02:20:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:16.575 02:20:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.575 02:20:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.575 02:20:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.575 02:20:57 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:16.575 02:20:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.575 02:20:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:16.575 02:20:57 -- setup/hugepages.sh@83 -- # : 0 00:04:16.575 02:20:57 -- setup/hugepages.sh@84 -- # : 0 00:04:16.575 02:20:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.575 02:20:57 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:16.575 02:20:57 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:16.575 02:20:57 -- setup/hugepages.sh@153 -- # setup output 00:04:16.575 02:20:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.575 02:20:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:16.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.097 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.097 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.097 02:20:57 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:17.097 02:20:57 -- setup/hugepages.sh@89 -- # local node 00:04:17.097 02:20:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.097 02:20:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.097 02:20:57 -- setup/hugepages.sh@92 -- # local surp 00:04:17.097 02:20:57 -- setup/hugepages.sh@93 -- # local resv 00:04:17.097 02:20:57 -- setup/hugepages.sh@94 -- # local anon 00:04:17.097 02:20:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.097 02:20:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.097 02:20:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.097 02:20:57 -- setup/common.sh@18 -- # local node= 00:04:17.097 02:20:57 -- setup/common.sh@19 -- # local var val 00:04:17.097 02:20:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.097 02:20:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.097 02:20:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.097 02:20:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.097 02:20:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.097 02:20:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7956396 kB' 'MemAvailable: 9467400 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498312 kB' 'Inactive: 1344892 kB' 'Active(anon): 129156 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120256 kB' 'Mapped: 50688 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163024 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95220 kB' 'KernelStack: 6376 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 
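[editor's note] For the even_2G_alloc run that started above, the arithmetic traced in setup/hugepages.sh is: the requested size (2097152, which appears to be 2 GiB expressed in kB) divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and with HUGE_EVEN_ALLOC=yes those pages are spread evenly over the NUMA nodes — a single node in this VM, so node 0 receives all 1024. A hedged sketch of that calculation, with illustrative variable names rather than the script's own:

size_kb=2097152                               # argument to get_test_nr_hugepages
hugepagesize_kb=2048                          # Hugepagesize in the meminfo snapshot
nr_hugepages=$(( size_kb / hugepagesize_kb )) # -> 1024
no_nodes=1                                    # this VM exposes a single NUMA node
per_node=$(( nr_hugepages / no_nodes ))       # -> all 1024 pages land on node0
echo "nr_hugepages=$nr_hugepages ($per_node per node)"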
02:20:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.097 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.097 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # 
continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.098 02:20:57 -- setup/common.sh@33 -- # echo 0 00:04:17.098 02:20:57 -- setup/common.sh@33 -- # return 0 00:04:17.098 02:20:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:17.098 02:20:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.098 02:20:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.098 02:20:57 -- setup/common.sh@18 -- # local node= 00:04:17.098 02:20:57 -- setup/common.sh@19 -- # local var val 00:04:17.098 02:20:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.098 02:20:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.098 02:20:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.098 02:20:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.098 02:20:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.098 02:20:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7956396 kB' 'MemAvailable: 9467400 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497888 kB' 'Inactive: 1344892 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119820 kB' 'Mapped: 50740 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163020 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95216 kB' 'KernelStack: 6296 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 
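[editor's note] The guard traced at setup/hugepages.sh@96–97 above consults AnonHugePages only because transparent hugepages are not globally disabled ("always [madvise] never" does not contain "[never]"); the lookup returned 0 kB, so anon=0. A small sketch of that guard — the sysfs path is the standard kernel interface, and get_meminfo_sketch is the illustrative helper from the earlier note, not the SPDK function:

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP may be in use, so count anonymous hugepages (0 kB in this run)
    anon=$(get_meminfo_sketch AnonHugePages)
else
    anon=0
fi
echo "anon_hugepages=$anon"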
00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.098 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.098 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # 
continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.099 02:20:57 -- setup/common.sh@33 -- # echo 0 00:04:17.099 02:20:57 -- setup/common.sh@33 -- # return 0 00:04:17.099 02:20:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:17.099 02:20:57 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.099 02:20:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.099 02:20:57 -- setup/common.sh@18 -- # local node= 00:04:17.099 02:20:57 -- setup/common.sh@19 -- # local var val 00:04:17.099 02:20:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.099 02:20:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.099 02:20:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.099 02:20:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.099 02:20:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.099 02:20:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7958744 kB' 'MemAvailable: 9469748 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497800 kB' 'Inactive: 1344892 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 50620 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163024 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95220 kB' 'KernelStack: 6352 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.099 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.099 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 
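The mem=("${mem[@]#Node +([0-9]) }") step traced at setup/common.sh@29 strips the "Node N " prefix that per-node meminfo files carry, so the same scan works for /proc/meminfo and for node files alike. A small standalone illustration of that extglob expansion (illustrative variable, not taken from the script):

# extglob must be on for +([0-9]) to mean "one or more digits" in the pattern.
shopt -s extglob
line='Node 0 HugePages_Total:  1024'
echo "${line#Node +([0-9]) }"    # -> 'HugePages_Total:  1024'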
00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- 
setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.100 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.100 02:20:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.101 02:20:57 -- setup/common.sh@33 -- # echo 0 00:04:17.101 02:20:57 -- setup/common.sh@33 -- # return 0 00:04:17.101 02:20:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:17.101 nr_hugepages=1024 00:04:17.101 02:20:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.101 resv_hugepages=0 00:04:17.101 02:20:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.101 surplus_hugepages=0 00:04:17.101 02:20:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.101 anon_hugepages=0 00:04:17.101 02:20:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.101 02:20:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.101 02:20:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.101 02:20:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.101 02:20:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.101 02:20:57 -- setup/common.sh@18 -- # local node= 00:04:17.101 02:20:57 -- setup/common.sh@19 -- # local var val 00:04:17.101 02:20:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.101 02:20:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.101 02:20:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.101 02:20:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.101 02:20:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.101 02:20:57 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7958728 kB' 'MemAvailable: 9469732 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497996 kB' 'Inactive: 1344892 kB' 'Active(anon): 128840 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119932 kB' 'Mapped: 50620 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163020 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95216 kB' 'KernelStack: 6352 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 
02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.101 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.101 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 
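The hugepages.sh lines interleaved a little earlier (surp=0, resv=0, the nr_hugepages=1024 echoes, and the arithmetic test at hugepages.sh@107) amount to one accounting identity: the kernel's hugepage totals must add up to the 1024 pages this test configured, with no surplus or reserved pages unaccounted for. A hedged restatement of that check (the awk extraction is illustrative; the harness uses its own parser):

# Re-derive the numbers the test just asserted, straight from /proc/meminfo.
expected=1024                                             # pages requested by even_2G_alloc
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
(( total == expected + surp + resv )) && echo "accounting OK: $total pages"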
00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.102 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.102 02:20:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.103 02:20:57 -- setup/common.sh@33 -- # echo 1024 00:04:17.103 02:20:57 -- setup/common.sh@33 -- # return 0 00:04:17.103 02:20:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.103 02:20:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.103 02:20:57 -- setup/hugepages.sh@27 -- # local node 00:04:17.103 02:20:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.103 02:20:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.103 02:20:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:17.103 02:20:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.103 02:20:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.103 02:20:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.103 02:20:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.103 02:20:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.103 02:20:57 -- setup/common.sh@18 -- # local node=0 00:04:17.103 02:20:57 -- setup/common.sh@19 -- # local var val 00:04:17.103 02:20:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.103 02:20:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.103 02:20:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.103 02:20:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.103 02:20:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.103 02:20:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7958728 kB' 'MemUsed: 4280392 kB' 'SwapCached: 0 kB' 'Active: 497684 kB' 'Inactive: 1344892 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 1724532 kB' 'Mapped: 50620 kB' 'AnonPages: 119660 kB' 'Shmem: 10484 kB' 'KernelStack: 6352 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67804 kB' 'Slab: 163020 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.103 02:20:57 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 
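In the get_nodes/per-node pass traced just above, the same meminfo scan is pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo, so the 1024 pages can be verified on the only NUMA node this VM exposes. A sketch of that per-node query (field positions assume the kernel's "Node 0 Key: value" layout):

# Per-node hugepage surplus, read from sysfs rather than /proc/meminfo.
node=0
node_meminfo=/sys/devices/system/node/node${node}/meminfo
awk '$3 == "HugePages_Surp:" {print $4}' "$node_meminfo"   # 0 expected on this run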
00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- 
setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.103 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.103 02:20:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # continue 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.104 02:20:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.104 02:20:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.104 02:20:57 -- setup/common.sh@33 -- # echo 0 00:04:17.104 02:20:57 -- setup/common.sh@33 -- # return 0 00:04:17.104 02:20:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.104 02:20:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.104 02:20:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.104 02:20:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.104 
node0=1024 expecting 1024 00:04:17.104 02:20:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.104 02:20:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.104 00:04:17.104 real 0m0.564s 00:04:17.104 user 0m0.271s 00:04:17.104 sys 0m0.321s 00:04:17.104 02:20:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.104 02:20:57 -- common/autotest_common.sh@10 -- # set +x 00:04:17.104 ************************************ 00:04:17.104 END TEST even_2G_alloc 00:04:17.104 ************************************ 00:04:17.104 02:20:57 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:17.104 02:20:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.104 02:20:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.104 02:20:57 -- common/autotest_common.sh@10 -- # set +x 00:04:17.104 ************************************ 00:04:17.104 START TEST odd_alloc 00:04:17.104 ************************************ 00:04:17.104 02:20:57 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:17.104 02:20:57 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:17.104 02:20:57 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:17.104 02:20:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.104 02:20:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.104 02:20:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:17.104 02:20:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.104 02:20:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.104 02:20:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.104 02:20:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:17.104 02:20:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:17.104 02:20:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.104 02:20:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.104 02:20:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.104 02:20:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.104 02:20:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.104 02:20:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:17.104 02:20:57 -- setup/hugepages.sh@83 -- # : 0 00:04:17.104 02:20:57 -- setup/hugepages.sh@84 -- # : 0 00:04:17.104 02:20:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.104 02:20:57 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:17.104 02:20:57 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:17.104 02:20:57 -- setup/hugepages.sh@160 -- # setup output 00:04:17.104 02:20:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.104 02:20:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.675 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.675 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:17.675 02:20:58 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:17.675 02:20:58 -- setup/hugepages.sh@89 -- # local node 00:04:17.675 02:20:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.675 02:20:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.675 02:20:58 -- setup/hugepages.sh@92 -- # local surp 00:04:17.675 02:20:58 -- setup/hugepages.sh@93 -- # local resv 00:04:17.675 02:20:58 -- setup/hugepages.sh@94 -- # local anon 00:04:17.675 02:20:58 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.675 02:20:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.675 02:20:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.675 02:20:58 -- setup/common.sh@18 -- # local node= 00:04:17.675 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:17.675 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.675 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.675 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.675 02:20:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.675 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.675 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7964876 kB' 'MemAvailable: 9475880 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498492 kB' 'Inactive: 1344892 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120480 kB' 'Mapped: 51112 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163040 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95236 kB' 'KernelStack: 6412 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 325316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 
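The odd_alloc prologue above requested get_test_nr_hugepages 2098176 under HUGEMEM=2049 and settled on nr_hugepages=1025. Working that out from the Hugepagesize: 2048 kB reported in the meminfo dumps, a ceiling division reproduces the same odd count; this is a minimal sketch of the sizing arithmetic, not the exact helper from setup/hugepages.sh:

  size_kb=2098176                     # HUGEMEM=2049 MiB expressed in kB (2049 * 1024)
  hugepage_kb=2048                    # Hugepagesize: 2048 kB, as printed in the dumps above
  nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
  echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1025, an odd page count by design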
00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # 
continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.675 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.676 02:20:58 -- setup/common.sh@33 -- # echo 0 00:04:17.676 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:17.676 02:20:58 -- setup/hugepages.sh@97 -- # anon=0 00:04:17.676 02:20:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.676 02:20:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.676 02:20:58 -- setup/common.sh@18 -- # local node= 00:04:17.676 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:17.676 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.676 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.676 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.676 02:20:58 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.676 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.676 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7965136 kB' 'MemAvailable: 9476140 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498080 kB' 'Inactive: 1344892 kB' 'Active(anon): 128924 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 120036 kB' 'Mapped: 50852 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163040 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95236 kB' 'KernelStack: 6348 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 
02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 
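The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue entries through this stretch are setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches the one it was asked for, then echoing that value and returning. Condensed into a standalone sketch using the same IFS=': ' / read -r var val _ convention seen in the trace (the real helper slurps the file with mapfile first and can also take a node id; the function name below is illustrative):

  get_meminfo_sketch() {               # system-wide lookup only, unlike the real helper
      local get=$1
      local var val _
      while IFS=': ' read -r var val _; do
          # e.g. "HugePages_Surp:  0" splits into var=HugePages_Surp, val=0
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < /proc/meminfo
      echo 0                           # field absent: report 0
  }
  get_meminfo_sketch HugePages_Surp    # -> 0 on this run, the value echoed at common.sh@33 above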
00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.677 02:20:58 -- setup/common.sh@33 -- # echo 0 00:04:17.677 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:17.677 02:20:58 -- setup/hugepages.sh@99 -- # surp=0 00:04:17.677 02:20:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.677 02:20:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.677 02:20:58 -- setup/common.sh@18 -- # local node= 00:04:17.678 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:17.678 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.678 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.678 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.678 02:20:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.678 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.678 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7964884 kB' 'MemAvailable: 9475888 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497912 kB' 'Inactive: 1344892 kB' 'Active(anon): 128756 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119600 kB' 'Mapped: 50676 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163036 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95232 kB' 'KernelStack: 6372 kB' 
'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.678 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.679 02:20:58 -- setup/common.sh@33 -- # echo 0 00:04:17.679 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:17.679 02:20:58 -- setup/hugepages.sh@100 -- # resv=0 00:04:17.679 nr_hugepages=1025 00:04:17.679 02:20:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:17.679 resv_hugepages=0 00:04:17.679 02:20:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.679 surplus_hugepages=0 00:04:17.679 02:20:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.679 anon_hugepages=0 00:04:17.679 02:20:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.679 02:20:58 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:17.679 02:20:58 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:17.679 02:20:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.679 02:20:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.679 02:20:58 -- setup/common.sh@18 -- # local node= 00:04:17.679 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:17.679 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.679 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.679 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.679 02:20:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.679 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.679 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7964884 kB' 'MemAvailable: 9475888 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497912 kB' 'Inactive: 1344892 kB' 'Active(anon): 128756 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'AnonPages: 119860 kB' 'Mapped: 50676 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163036 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95232 kB' 'KernelStack: 6372 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 
00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.679 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 
00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 02:20:58 -- setup/common.sh@33 -- # echo 1025 00:04:17.680 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:17.680 02:20:58 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:17.680 02:20:58 -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.681 02:20:58 -- setup/hugepages.sh@27 -- # local node 00:04:17.681 02:20:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.681 02:20:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
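The long run of "continue" entries above is setup/common.sh's get_meminfo stepping through every /proc/meminfo key until it reaches HugePages_Total, which it echoes (1025 here); hugepages.sh@110 then asserts that this total equals nr_hugepages + surp + resv for the odd_alloc case. A minimal standalone sketch of that scan, using a hypothetical helper name (the traced function additionally handles the per-node meminfo files used just below):

    # Sketch only: scan a meminfo-style file for one key and print its value.
    meminfo_value() {
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    # meminfo_value HugePages_Total   -> 1025 in this run
    # meminfo_value MemTotal          -> 12239120 (kB)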
00:04:17.681 02:20:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:17.681 02:20:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.681 02:20:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.681 02:20:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.681 02:20:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.681 02:20:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.681 02:20:58 -- setup/common.sh@18 -- # local node=0 00:04:17.681 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:17.681 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.681 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.681 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.681 02:20:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.681 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.681 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7964884 kB' 'MemUsed: 4274236 kB' 'SwapCached: 0 kB' 'Active: 497868 kB' 'Inactive: 1344892 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 324 kB' 'Writeback: 0 kB' 'FilePages: 1724532 kB' 'Mapped: 50676 kB' 'AnonPages: 119768 kB' 'Shmem: 10484 kB' 'KernelStack: 6408 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67804 kB' 'Slab: 163036 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 
02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 
02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.682 02:20:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.682 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.682 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.682 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.682 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.682 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.682 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.682 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.682 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.682 02:20:58 -- setup/common.sh@32 -- # continue 00:04:17.682 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.682 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.682 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.682 02:20:58 -- setup/common.sh@33 -- # echo 0 00:04:17.682 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:17.682 02:20:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.682 02:20:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.682 02:20:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.682 02:20:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.682 node0=1025 expecting 1025 00:04:17.682 02:20:58 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:17.682 02:20:58 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:17.682 00:04:17.682 real 0m0.535s 00:04:17.682 user 0m0.253s 00:04:17.682 sys 0m0.317s 00:04:17.682 02:20:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:17.682 02:20:58 -- common/autotest_common.sh@10 -- # set +x 00:04:17.682 ************************************ 00:04:17.682 END TEST odd_alloc 00:04:17.682 ************************************ 00:04:17.682 02:20:58 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:17.682 02:20:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.682 02:20:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.682 02:20:58 -- common/autotest_common.sh@10 -- # set +x 00:04:17.940 ************************************ 00:04:17.940 START TEST custom_alloc 00:04:17.940 ************************************ 00:04:17.940 02:20:58 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:17.940 02:20:58 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:17.940 02:20:58 -- setup/hugepages.sh@169 -- # local node 00:04:17.940 02:20:58 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:17.940 02:20:58 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:17.940 02:20:58 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:17.940 02:20:58 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:17.940 02:20:58 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:17.940 02:20:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.940 02:20:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.940 02:20:58 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:17.940 02:20:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.940 02:20:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.940 02:20:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.940 02:20:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:17.940 02:20:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:17.940 02:20:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.940 02:20:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.940 02:20:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.940 02:20:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.940 02:20:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.940 02:20:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:17.940 02:20:58 -- setup/hugepages.sh@83 -- # : 0 00:04:17.940 02:20:58 -- setup/hugepages.sh@84 -- # : 0 00:04:17.940 02:20:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.940 02:20:58 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:17.940 02:20:58 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:17.940 02:20:58 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:17.940 02:20:58 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:17.940 02:20:58 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:17.940 02:20:58 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:17.940 02:20:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.940 02:20:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.940 02:20:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:17.940 02:20:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:17.941 02:20:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.941 02:20:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.941 02:20:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.941 02:20:58 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:17.941 02:20:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.941 02:20:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:17.941 02:20:58 -- setup/hugepages.sh@78 -- # return 0 00:04:17.941 02:20:58 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:17.941 02:20:58 -- setup/hugepages.sh@187 -- # setup output 00:04:17.941 02:20:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.941 02:20:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.206 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.206 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.206 02:20:58 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:18.206 02:20:58 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:18.206 02:20:58 -- setup/hugepages.sh@89 -- # local node 00:04:18.206 02:20:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.206 02:20:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.206 02:20:58 -- setup/hugepages.sh@92 -- # local surp 
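custom_alloc's prologue above converts the requested size into default-sized hugepages and, with only one NUMA node on this VM, assigns them all to node 0 (nodes_hp[0]=512, HUGENODE='nodes_hp[0]=512'). The arithmetic, restated with the values visible in this log (Hugepagesize is 2048 kB per the meminfo dump just below):

    size_kb=1048576          # argument to get_test_nr_hugepages
    hugepagesize_kb=2048     # Hugepagesize reported by /proc/meminfo here
    echo $(( size_kb / hugepagesize_kb ))   # 512 pages, all on node 0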
00:04:18.206 02:20:58 -- setup/hugepages.sh@93 -- # local resv 00:04:18.206 02:20:58 -- setup/hugepages.sh@94 -- # local anon 00:04:18.206 02:20:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.206 02:20:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.206 02:20:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.206 02:20:58 -- setup/common.sh@18 -- # local node= 00:04:18.206 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:18.206 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.206 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.206 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.206 02:20:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.206 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.206 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9016536 kB' 'MemAvailable: 10527540 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498216 kB' 'Inactive: 1344892 kB' 'Active(anon): 129060 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120404 kB' 'Mapped: 50656 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163044 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95240 kB' 'KernelStack: 6328 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.206 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.206 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.207 02:20:58 -- setup/common.sh@33 -- # echo 0 00:04:18.207 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:18.207 02:20:58 -- setup/hugepages.sh@97 -- # anon=0 00:04:18.207 02:20:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.207 02:20:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.207 02:20:58 -- setup/common.sh@18 -- # local node= 00:04:18.207 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:18.207 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.207 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
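The AnonHugePages lookup that just returned 0 (anon=0) is gated by the earlier hugepages.sh@96 test, which only counts anonymous THP usage when transparent hugepages are not globally disabled ('always [madvise] never' does not match '*[never]*'). A rough sketch of that gate; the sysfs path is the kernel's standard location and is assumed rather than taken from the traced script:

    # Sketch only: read anonymous THP usage unless THP mode is [never].
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp_mode != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # in kB
    else
        anon=0
    fi
    echo "anon=$anon"   # 0 in this run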
00:04:18.207 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.207 02:20:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.207 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.207 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9016796 kB' 'MemAvailable: 10527800 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498452 kB' 'Inactive: 1344892 kB' 'Active(anon): 129296 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120148 kB' 'Mapped: 50712 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163056 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95252 kB' 'KernelStack: 6392 kB' 'PageTables: 4788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.207 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.207 02:20:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- 
setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.208 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.208 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.209 02:20:58 -- setup/common.sh@33 -- # echo 0 00:04:18.209 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:18.209 02:20:58 -- setup/hugepages.sh@99 -- # surp=0 00:04:18.209 02:20:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.209 02:20:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.209 02:20:58 -- setup/common.sh@18 -- # local node= 00:04:18.209 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:18.209 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.209 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.209 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.209 02:20:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.209 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.209 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9016992 kB' 'MemAvailable: 10527996 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 498100 kB' 'Inactive: 1344892 kB' 'Active(anon): 128944 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120060 kB' 'Mapped: 
50712 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163032 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95228 kB' 'KernelStack: 6328 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.209 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.209 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 
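The block above is the get_meminfo helper walking every key of the memory report until it reaches the one that was requested (HugePages_Rsvd here): each line is split on IFS=': ' into a key and a value, every non-matching key traces a continue, and the first match echoes the value and returns. A condensed sketch of that pattern, with names assumed from the trace rather than copied from the SPDK helper:

get_meminfo() {  # condensed sketch of the pattern in the trace, not the verbatim SPDK helper
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    # with a node id, prefer the per-node counters exported by sysfs (single-digit nodes only here)
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node [0-9] }               # per-node files prefix every key with "Node N "
        IFS=': ' read -r var val _ <<<"$line"  # key, numeric value, trailing unit if any
        [[ $var == "$get" ]] || continue       # literal comparison, exactly like the loop above
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}
get_meminfo HugePages_Rsvd   # -> 0 on this runner, matching the "echo 0" a few lines further down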
00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.210 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.210 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 
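The heavily escaped right-hand side (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) is not corruption in the log: the comparison quotes its right-hand side so [[ ]] matches it literally instead of as a glob, and bash's xtrace re-prints a quoted pattern with every character backslash-escaped to keep it literal on replay. A two-line illustration of that behaviour (variable names are only for the example):

set -x
get=HugePages_Rsvd var=VmallocUsed
[[ $var == "$get" ]]   # traced roughly as: [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]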
00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.211 02:20:58 -- setup/common.sh@33 -- # echo 0 00:04:18.211 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:18.211 02:20:58 -- setup/hugepages.sh@100 -- # resv=0 00:04:18.211 nr_hugepages=512 00:04:18.211 02:20:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:18.211 resv_hugepages=0 00:04:18.211 02:20:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.211 02:20:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.211 surplus_hugepages=0 00:04:18.211 anon_hugepages=0 00:04:18.211 02:20:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.211 02:20:58 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:18.211 02:20:58 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:18.211 02:20:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.211 02:20:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.211 02:20:58 -- setup/common.sh@18 -- # local node= 00:04:18.211 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:18.211 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.211 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.211 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.211 02:20:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.211 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.211 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9016992 kB' 'MemAvailable: 10527996 kB' 'Buffers: 2684 kB' 'Cached: 1721848 kB' 'SwapCached: 0 kB' 'Active: 497888 kB' 'Inactive: 1344892 kB' 'Active(anon): 128732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 50624 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163036 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95232 kB' 'KernelStack: 6336 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 322376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.211 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.211 02:20:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 
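With surp=0 and resv=0 captured above, the (( 512 == nr_hugepages + surp + resv )) and (( 512 == nr_hugepages )) checks traced just before this HugePages_Total scan are plain accounting against the 512 pages this test requested (variable names as they appear in the trace):

nr_hugepages=512 surp=0 resv=0              # values read out of /proc/meminfo above
(( 512 == nr_hugepages + surp + resv )) && echo "all 512 requested pages accounted for"
(( 512 == nr_hugepages ))               && echo "none of them surplus or reserved"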
00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.212 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.212 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.499 02:20:58 -- setup/common.sh@33 -- # echo 512 00:04:18.499 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:18.499 02:20:58 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:18.499 02:20:58 -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.499 02:20:58 -- setup/hugepages.sh@27 -- # local node 00:04:18.499 02:20:58 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:18.499 02:20:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.499 02:20:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:18.499 02:20:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.499 02:20:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.499 02:20:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.499 02:20:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.499 02:20:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.499 02:20:58 -- setup/common.sh@18 -- # local node=0 00:04:18.499 02:20:58 -- setup/common.sh@19 -- # local var val 00:04:18.499 02:20:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.499 02:20:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.499 02:20:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.499 02:20:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.499 02:20:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.499 02:20:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9016992 kB' 'MemUsed: 3222128 kB' 'SwapCached: 0 kB' 'Active: 497836 kB' 'Inactive: 1344892 kB' 'Active(anon): 128680 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724532 kB' 'Mapped: 50624 kB' 'AnonPages: 119804 kB' 'Shmem: 10484 kB' 'KernelStack: 6320 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67804 kB' 'Slab: 163036 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.499 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.499 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 
02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # continue 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.500 02:20:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.500 02:20:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.500 02:20:58 -- setup/common.sh@33 -- # echo 0 00:04:18.500 02:20:58 -- setup/common.sh@33 -- # return 0 00:04:18.500 02:20:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.500 02:20:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.500 02:20:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.500 02:20:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.500 node0=512 expecting 512 00:04:18.500 02:20:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:18.500 02:20:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:18.500 00:04:18.500 real 0m0.553s 00:04:18.500 user 0m0.260s 00:04:18.500 sys 0m0.332s 00:04:18.500 02:20:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:18.500 02:20:58 -- common/autotest_common.sh@10 -- # set +x 00:04:18.500 ************************************ 00:04:18.500 END TEST custom_alloc 00:04:18.500 ************************************ 00:04:18.500 02:20:58 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:18.500 02:20:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.500 02:20:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.500 02:20:58 -- common/autotest_common.sh@10 -- # set +x 00:04:18.500 ************************************ 00:04:18.500 START TEST no_shrink_alloc 00:04:18.500 ************************************ 00:04:18.500 02:20:58 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:18.500 02:20:58 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:18.500 02:20:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:18.500 02:20:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:18.500 02:20:58 -- 
setup/hugepages.sh@51 -- # shift 00:04:18.500 02:20:58 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:18.500 02:20:58 -- setup/hugepages.sh@52 -- # local node_ids 00:04:18.500 02:20:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.500 02:20:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.500 02:20:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:18.500 02:20:58 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:18.500 02:20:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.500 02:20:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.500 02:20:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:18.500 02:20:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.500 02:20:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.500 02:20:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:18.500 02:20:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:18.500 02:20:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:18.500 02:20:58 -- setup/hugepages.sh@73 -- # return 0 00:04:18.500 02:20:58 -- setup/hugepages.sh@198 -- # setup output 00:04:18.500 02:20:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.500 02:20:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.763 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.763 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:18.763 02:20:59 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:18.763 02:20:59 -- setup/hugepages.sh@89 -- # local node 00:04:18.763 02:20:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.763 02:20:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.763 02:20:59 -- setup/hugepages.sh@92 -- # local surp 00:04:18.763 02:20:59 -- setup/hugepages.sh@93 -- # local resv 00:04:18.763 02:20:59 -- setup/hugepages.sh@94 -- # local anon 00:04:18.763 02:20:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.763 02:20:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.763 02:20:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.763 02:20:59 -- setup/common.sh@18 -- # local node= 00:04:18.763 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:18.763 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.763 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.763 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.763 02:20:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.763 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.763 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.763 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.763 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7967488 kB' 'MemAvailable: 9478492 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 498132 kB' 'Inactive: 1344892 kB' 'Active(anon): 128976 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120100 kB' 
'Mapped: 50752 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163028 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95224 kB' 'KernelStack: 6360 kB' 'PageTables: 4656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.764 02:20:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
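The no_shrink_alloc test that started above sizes its request as 2097152 kB of hugepages pinned to node 0; at the default 2048 kB page size that works out to the nr_hugepages=1024 seen in the trace (and the HugePages_Total: 1024 / Hugetlb: 2097152 kB rows in the meminfo dumps). verify_nr_hugepages then probes transparent hugepages before counting: the 'always [madvise] never' string compared against *\[\n\e\v\e\r\]* is the usual content of /sys/kernel/mm/transparent_hugepage/enabled, and since THP is not set to never, AnonHugePages is read as well (it comes back 0 in this run). A rough reconstruction, with the sysfs path and names assumed rather than taken from the script:

size_kb=2097152 hugepagesize_kb=2048
echo $(( size_kb / hugepagesize_kb ))                  # -> 1024 pages requested on node 0

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # THP can hand out anonymous huge pages on its own, so they are counted separately
    anon=$(get_meminfo AnonHugePages)                  # 0 here, per the anon=0 traced below
fi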
00:04:18.764 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.764 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.765 02:20:59 -- setup/common.sh@33 -- # echo 0 00:04:18.765 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:18.765 02:20:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:18.765 02:20:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.765 02:20:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.765 02:20:59 -- setup/common.sh@18 -- # local node= 00:04:18.765 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:18.765 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.765 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.765 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.765 02:20:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.765 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.765 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7967488 kB' 'MemAvailable: 9478496 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 498140 kB' 'Inactive: 1344896 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120072 kB' 'Mapped: 50752 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163024 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95220 kB' 'KernelStack: 6328 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.765 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.765 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.766 02:20:59 -- setup/common.sh@33 -- # echo 0 00:04:18.766 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:18.766 02:20:59 -- setup/hugepages.sh@99 -- # surp=0 00:04:18.766 02:20:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.766 02:20:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.766 02:20:59 -- setup/common.sh@18 -- # local node= 00:04:18.766 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:18.766 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:18.766 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.766 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.766 02:20:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.766 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.766 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7967488 kB' 'MemAvailable: 9478496 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 497960 kB' 'Inactive: 1344896 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119860 kB' 'Mapped: 50624 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163028 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95224 kB' 'KernelStack: 6304 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 
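(Editor's note, not part of the captured console output.) At this point the trace has already resolved anon=0 (AnonHugePages) and surp=0 (HugePages_Surp), and the scan in progress fetches HugePages_Rsvd; a little further down, setup/hugepages.sh checks that HugePages_Total equals the requested nr_hugepages plus surplus and reserved pages, then compares each NUMA node's share against the expectation ("node0=1024 expecting 1024"). A rough sketch of that bookkeeping is below, reusing the illustrative get_meminfo_sketch helper from the earlier note; the names are hypothetical and this is not the verbatim setup/hugepages.sh logic.

# Sketch of the hugepage accounting verified in this trace.
verify_hugepages_sketch() {
    local expected=$1                                   # e.g. 1024 in this run
    local anon surp resv total node
    anon=$(get_meminfo_sketch AnonHugePages)            # transparent hugepages in use
    surp=$(get_meminfo_sketch HugePages_Surp)           # surplus pages beyond the configured pool
    resv=$(get_meminfo_sketch HugePages_Rsvd)           # reserved but not yet faulted in
    total=$(get_meminfo_sketch HugePages_Total)
    # pool is consistent when the kernel-wide total matches the requested
    # count plus surplus and reserved pages (1024 == 1024 + 0 + 0 here)
    (( total == expected + surp + resv )) || return 1
    # then report each NUMA node's share, as in "node0=1024 expecting 1024"
    for node in /sys/devices/system/node/node[0-9]*; do
        echo "node${node##*node}=$(get_meminfo_sketch HugePages_Total "${node##*node}") expecting $expected"
    done
}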
00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:18.766 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.766 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 
-- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.027 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.027 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.028 02:20:59 -- setup/common.sh@33 -- # echo 0 00:04:19.028 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:19.028 02:20:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:19.028 nr_hugepages=1024 00:04:19.028 02:20:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.028 resv_hugepages=0 00:04:19.028 02:20:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.028 surplus_hugepages=0 00:04:19.028 02:20:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.028 anon_hugepages=0 00:04:19.028 02:20:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.028 02:20:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.028 02:20:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.028 02:20:59 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:04:19.028 02:20:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.028 02:20:59 -- setup/common.sh@18 -- # local node= 00:04:19.028 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:19.028 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.028 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.028 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.028 02:20:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.028 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.028 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7968176 kB' 'MemAvailable: 9479184 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 497924 kB' 'Inactive: 1344896 kB' 'Active(anon): 128768 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119872 kB' 'Mapped: 50624 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163028 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95224 kB' 'KernelStack: 6304 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.028 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.028 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.029 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.029 02:20:59 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:19.029 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.030 02:20:59 -- setup/common.sh@33 -- # echo 1024 00:04:19.030 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:19.030 02:20:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.030 02:20:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.030 02:20:59 -- setup/hugepages.sh@27 -- # local node 00:04:19.030 02:20:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.030 02:20:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.030 02:20:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:19.030 02:20:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.030 02:20:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.030 02:20:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.030 02:20:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.030 02:20:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.030 02:20:59 -- setup/common.sh@18 -- # local node=0 00:04:19.030 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:19.030 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.030 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.030 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.030 02:20:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.030 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.030 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7967924 kB' 'MemUsed: 4271196 kB' 'SwapCached: 0 kB' 'Active: 497756 kB' 'Inactive: 1344896 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 
'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724536 kB' 'Mapped: 50624 kB' 'AnonPages: 119712 kB' 'Shmem: 10484 kB' 'KernelStack: 6352 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67804 kB' 'Slab: 163028 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 
-- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.030 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.030 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.031 02:20:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.031 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.031 02:20:59 -- setup/common.sh@33 -- # echo 0 00:04:19.031 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:19.031 02:20:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.031 02:20:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.031 02:20:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.031 02:20:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.031 node0=1024 expecting 1024 00:04:19.031 02:20:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.031 02:20:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.031 02:20:59 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:19.031 02:20:59 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:19.031 02:20:59 -- setup/hugepages.sh@202 -- # setup output 00:04:19.031 02:20:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.031 02:20:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.292 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.292 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:19.292 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:19.292 02:20:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:19.292 02:20:59 -- setup/hugepages.sh@89 -- # local node 00:04:19.292 02:20:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.292 02:20:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.292 02:20:59 -- setup/hugepages.sh@92 -- # local surp 00:04:19.292 02:20:59 -- setup/hugepages.sh@93 -- # local resv 00:04:19.292 02:20:59 -- setup/hugepages.sh@94 -- # local anon 00:04:19.292 02:20:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.292 02:20:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.292 02:20:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.292 02:20:59 -- setup/common.sh@18 -- # local node= 00:04:19.292 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:19.292 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.292 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.292 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.292 02:20:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.292 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.292 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7968876 kB' 'MemAvailable: 9479884 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 498720 kB' 'Inactive: 1344896 kB' 'Active(anon): 129564 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120760 kB' 'Mapped: 50948 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 
163040 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95236 kB' 'KernelStack: 6488 kB' 'PageTables: 4788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.292 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.292 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.293 02:20:59 -- setup/common.sh@33 -- # echo 0 00:04:19.293 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:19.293 02:20:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:19.293 02:20:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.293 02:20:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.293 02:20:59 -- setup/common.sh@18 -- # local node= 00:04:19.293 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:19.293 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.293 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.293 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.293 02:20:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.293 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.293 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7968880 kB' 'MemAvailable: 9479888 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 498124 kB' 'Inactive: 1344896 kB' 'Active(anon): 128968 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120100 kB' 'Mapped: 50828 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163040 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95236 kB' 'KernelStack: 6384 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.293 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.293 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 
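The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries in this trace are setup/common.sh's get_meminfo helper walking a captured /proc/meminfo snapshot one field at a time until it reaches the requested key, then echoing that key's value (0 for AnonHugePages and HugePages_Surp in this run). A minimal sketch of that helper, reconstructed from the trace markers (common.sh@17 through @33); the variable names match the trace, while the loop body and the extglob prefix-stripping are inferred rather than copied verbatim from the SPDK source:

    # Reconstructed sketch of setup/common.sh::get_meminfo (names from the trace, body inferred)
    shopt -s extglob                                  # required for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2                          # no node argument -> system-wide stats
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # per-node lookup, e.g. get_meminfo HugePages_Surp 0
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")              # strip the "Node N " prefix of per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue          # the continue entries seen throughout this trace
            echo "$val"                               # e.g. HugePages_Surp -> 0
            return 0
        done
    }

Each call rescans the whole snapshot, which is why the same field list repeats once per key queried (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total).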
00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.294 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.294 02:20:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 
02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.295 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.295 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.295 02:20:59 -- setup/common.sh@33 -- # echo 0 00:04:19.295 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:19.295 02:20:59 -- setup/hugepages.sh@99 -- # surp=0 00:04:19.295 02:20:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.295 02:20:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.295 02:20:59 -- setup/common.sh@18 -- # local node= 00:04:19.295 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:19.295 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.295 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.295 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.295 02:20:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.295 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.556 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7968880 kB' 'MemAvailable: 9479888 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 498384 kB' 'Inactive: 1344896 kB' 'Active(anon): 129228 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120100 kB' 'Mapped: 50828 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163040 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95236 kB' 'KernelStack: 6384 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 
-- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.556 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.556 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 
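Every meminfo snapshot dumped in this pass reports the same hugepage state: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugepagesize: 2048 kB. That is internally consistent with the Hugetlb: 2097152 kB field (1024 pages x 2048 kB = 2097152 kB, i.e. 2 GiB), and it is why setup.sh logged "Requested 512 hugepages but 1024 already allocated on node0" earlier in this test. The same fields can be pulled straight from a live system with a one-liner (illustrative only, not part of the test scripts):

    awk '/^HugePages_(Total|Free|Rsvd|Surp)|^(Hugepagesize|Hugetlb)/' /proc/meminfo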
00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.557 02:20:59 -- setup/common.sh@33 -- # echo 0 00:04:19.557 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:19.557 02:20:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:19.557 nr_hugepages=1024 00:04:19.557 02:20:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.557 resv_hugepages=0 00:04:19.557 02:20:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.557 02:20:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.557 surplus_hugepages=0 00:04:19.557 anon_hugepages=0 00:04:19.557 02:20:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.557 02:20:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.557 02:20:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.557 02:20:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.557 02:20:59 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:19.557 02:20:59 -- setup/common.sh@18 -- # local node= 00:04:19.557 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:19.557 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.557 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.557 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.557 02:20:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.557 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.557 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.557 02:20:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7968880 kB' 'MemAvailable: 9479888 kB' 'Buffers: 2684 kB' 'Cached: 1721852 kB' 'SwapCached: 0 kB' 'Active: 498000 kB' 'Inactive: 1344896 kB' 'Active(anon): 128844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119984 kB' 'Mapped: 50624 kB' 'Shmem: 10484 kB' 'KReclaimable: 67804 kB' 'Slab: 163028 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95224 kB' 'KernelStack: 6352 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 322576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 194412 kB' 'DirectMap2M: 6096896 kB' 'DirectMap1G: 8388608 kB' 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.557 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.557 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- 
setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 
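The hugepages.sh markers in this stretch (@97, @99, @100, @107, @110 and the get_nodes/@115-@117 loop that follows) belong to verify_nr_hugepages: it collects AnonHugePages, HugePages_Surp and HugePages_Rsvd, checks the system-wide HugePages_Total against the requested count plus surplus and reserved pages, then repeats the surplus lookup per NUMA node from /sys/devices/system/node/node*/meminfo. A condensed reconstruction of that control flow (it reuses the get_meminfo sketch above; nr_hugepages is the requested page count, set by the surrounding script to 1024 for this test, and the per-node bookkeeping is simplified):

    # Condensed sketch of setup/hugepages.sh::verify_nr_hugepages as seen in this trace
    shopt -s extglob                                   # for the node+([0-9]) glob below
    verify_nr_hugepages() {
        local node anon surp resv
        local nr_hugepages=${nr_hugepages:-1024}       # normally set by the caller in hugepages.sh
        anon=$(get_meminfo AnonHugePages)              # -> 0 in this run
        surp=$(get_meminfo HugePages_Surp)             # -> 0
        resv=$(get_meminfo HugePages_Rsvd)             # -> 0
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        # the system-wide total must account for the requested pages plus surplus/reserved ones
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
        # the surplus lookup is then repeated for every NUMA node (only node0 on this VM)
        for node in /sys/devices/system/node/node+([0-9]); do
            node=${node##*node}
            echo "node${node} surplus: $(get_meminfo HugePages_Surp "$node")"
        done
    }

In this run all three counters are 0 and HugePages_Total is 1024, so the arithmetic matches, just as the earlier "node0=1024 expecting 1024" line in this log reported.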
00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 
02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.558 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.558 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # continue 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:20:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:20:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.559 02:20:59 -- setup/common.sh@33 -- # echo 1024 00:04:19.559 02:20:59 -- setup/common.sh@33 -- # return 0 00:04:19.559 02:20:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.559 02:20:59 -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.559 02:20:59 -- setup/hugepages.sh@27 -- # local node 00:04:19.559 02:20:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.559 02:20:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.559 02:20:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:19.559 02:20:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.559 02:20:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.559 02:20:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.559 02:20:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.559 02:20:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.559 02:20:59 -- setup/common.sh@18 -- # local node=0 00:04:19.559 02:20:59 -- setup/common.sh@19 -- # local var val 00:04:19.559 02:20:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:19.559 02:20:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.559 02:20:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.559 02:20:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.559 02:20:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.559 02:20:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.559 02:21:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7968880 kB' 'MemUsed: 4270240 kB' 'SwapCached: 0 kB' 'Active: 498016 kB' 'Inactive: 1344896 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 369156 kB' 'Inactive(file): 1344896 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1724536 kB' 'Mapped: 50624 kB' 'AnonPages: 119980 kB' 'Shmem: 10484 kB' 'KernelStack: 6352 kB' 
'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 67804 kB' 'Slab: 163028 kB' 'SReclaimable: 67804 kB' 'SUnreclaim: 95224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.559 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.559 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 
02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- 
# continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@32 -- # continue 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:19.560 02:21:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:19.560 02:21:00 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.560 02:21:00 -- setup/common.sh@33 -- # echo 0 00:04:19.560 02:21:00 -- setup/common.sh@33 -- # return 0 00:04:19.560 02:21:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.560 02:21:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.560 02:21:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.560 02:21:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.560 node0=1024 expecting 1024 00:04:19.560 02:21:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.560 02:21:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.560 00:04:19.560 real 0m1.105s 00:04:19.560 user 0m0.552s 00:04:19.560 sys 0m0.623s 00:04:19.560 02:21:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.560 02:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:19.560 ************************************ 00:04:19.560 END TEST no_shrink_alloc 00:04:19.560 ************************************ 00:04:19.560 02:21:00 -- setup/hugepages.sh@217 -- # clear_hp 00:04:19.560 02:21:00 -- setup/hugepages.sh@37 -- # local node hp 00:04:19.560 02:21:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.560 02:21:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.560 02:21:00 -- setup/hugepages.sh@41 -- # echo 0 00:04:19.560 02:21:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.560 02:21:00 -- setup/hugepages.sh@41 -- # echo 0 00:04:19.560 02:21:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:19.560 02:21:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:19.561 00:04:19.561 real 0m4.868s 00:04:19.561 user 0m2.329s 00:04:19.561 sys 0m2.670s 00:04:19.561 02:21:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:19.561 02:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:19.561 ************************************ 00:04:19.561 END TEST hugepages 00:04:19.561 ************************************ 00:04:19.561 02:21:00 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:19.561 02:21:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.561 02:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.561 02:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:19.561 ************************************ 00:04:19.561 START TEST driver 00:04:19.561 ************************************ 00:04:19.561 02:21:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:19.820 * Looking for test storage... 
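For reference, the wall of "[[ key == HugePages_* ]] / continue" lines in the hugepages test above is just a linear scan of a meminfo file: get_meminfo reads either /proc/meminfo or the per-NUMA-node meminfo, strips the "Node N" prefix, and echoes the value of the requested key. A minimal sketch of that technique (helper name, defaults, and the redirection in the clear_hp loop are assumptions, not the exact setup/common.sh code):

shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _

    # Prefer the per-node view when a node was requested and the file exists,
    # exactly as the trace shows for "get_meminfo HugePages_Surp 0".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "key: value ..." pairs until the requested key is found.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage mirroring the checks in the log above:
get_meminfo HugePages_Total      # expected to print 1024 here
get_meminfo HugePages_Surp 0     # surplus pages on node 0, printed 0 above

# The clear_hp step that closes the hugepages test amounts to zeroing every
# per-node pool (the trace only shows "echo 0"; the redirection target is an
# assumption):
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done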
00:04:19.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:19.820 02:21:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:19.820 02:21:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:19.820 02:21:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:19.820 02:21:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:19.820 02:21:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:19.820 02:21:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:19.820 02:21:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:19.820 02:21:00 -- scripts/common.sh@335 -- # IFS=.-: 00:04:19.820 02:21:00 -- scripts/common.sh@335 -- # read -ra ver1 00:04:19.820 02:21:00 -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.820 02:21:00 -- scripts/common.sh@336 -- # read -ra ver2 00:04:19.820 02:21:00 -- scripts/common.sh@337 -- # local 'op=<' 00:04:19.820 02:21:00 -- scripts/common.sh@339 -- # ver1_l=2 00:04:19.820 02:21:00 -- scripts/common.sh@340 -- # ver2_l=1 00:04:19.820 02:21:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:19.820 02:21:00 -- scripts/common.sh@343 -- # case "$op" in 00:04:19.820 02:21:00 -- scripts/common.sh@344 -- # : 1 00:04:19.820 02:21:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:19.820 02:21:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.820 02:21:00 -- scripts/common.sh@364 -- # decimal 1 00:04:19.820 02:21:00 -- scripts/common.sh@352 -- # local d=1 00:04:19.820 02:21:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.820 02:21:00 -- scripts/common.sh@354 -- # echo 1 00:04:19.820 02:21:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:19.820 02:21:00 -- scripts/common.sh@365 -- # decimal 2 00:04:19.820 02:21:00 -- scripts/common.sh@352 -- # local d=2 00:04:19.820 02:21:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.820 02:21:00 -- scripts/common.sh@354 -- # echo 2 00:04:19.820 02:21:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:19.820 02:21:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:19.820 02:21:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:19.820 02:21:00 -- scripts/common.sh@367 -- # return 0 00:04:19.820 02:21:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.820 02:21:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:19.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.820 --rc genhtml_branch_coverage=1 00:04:19.820 --rc genhtml_function_coverage=1 00:04:19.820 --rc genhtml_legend=1 00:04:19.820 --rc geninfo_all_blocks=1 00:04:19.820 --rc geninfo_unexecuted_blocks=1 00:04:19.820 00:04:19.820 ' 00:04:19.820 02:21:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:19.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.820 --rc genhtml_branch_coverage=1 00:04:19.820 --rc genhtml_function_coverage=1 00:04:19.820 --rc genhtml_legend=1 00:04:19.820 --rc geninfo_all_blocks=1 00:04:19.820 --rc geninfo_unexecuted_blocks=1 00:04:19.820 00:04:19.820 ' 00:04:19.820 02:21:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:19.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.820 --rc genhtml_branch_coverage=1 00:04:19.820 --rc genhtml_function_coverage=1 00:04:19.820 --rc genhtml_legend=1 00:04:19.820 --rc geninfo_all_blocks=1 00:04:19.820 --rc geninfo_unexecuted_blocks=1 00:04:19.820 00:04:19.820 ' 00:04:19.820 02:21:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:19.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.820 --rc genhtml_branch_coverage=1 00:04:19.820 --rc genhtml_function_coverage=1 00:04:19.820 --rc genhtml_legend=1 00:04:19.820 --rc geninfo_all_blocks=1 00:04:19.820 --rc geninfo_unexecuted_blocks=1 00:04:19.820 00:04:19.820 ' 00:04:19.820 02:21:00 -- setup/driver.sh@68 -- # setup reset 00:04:19.820 02:21:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.820 02:21:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.388 02:21:00 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.388 02:21:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.388 02:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.388 02:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.388 ************************************ 00:04:20.388 START TEST guess_driver 00:04:20.388 ************************************ 00:04:20.388 02:21:00 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:20.388 02:21:00 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.388 02:21:00 -- setup/driver.sh@47 -- # local fail=0 00:04:20.388 02:21:00 -- setup/driver.sh@49 -- # pick_driver 00:04:20.388 02:21:00 -- setup/driver.sh@36 -- # vfio 00:04:20.388 02:21:00 -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.388 02:21:00 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.388 02:21:00 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.388 02:21:00 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:20.388 02:21:00 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:20.388 02:21:00 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:20.388 02:21:00 -- setup/driver.sh@32 -- # return 1 00:04:20.388 02:21:00 -- setup/driver.sh@38 -- # uio 00:04:20.388 02:21:00 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:20.388 02:21:00 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:20.388 02:21:00 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:20.388 02:21:00 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:20.388 02:21:00 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:20.388 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:20.388 02:21:00 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:20.388 Looking for driver=uio_pci_generic 00:04:20.388 02:21:00 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:20.388 02:21:00 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:20.388 02:21:00 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:20.388 02:21:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:20.388 02:21:00 -- setup/driver.sh@45 -- # setup output config 00:04:20.388 02:21:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.388 02:21:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.323 02:21:01 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:21.323 02:21:01 -- setup/driver.sh@58 -- # continue 00:04:21.323 02:21:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.323 02:21:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.323 02:21:01 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:21.323 02:21:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.323 02:21:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:21.323 02:21:01 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:21.323 02:21:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:21.323 02:21:01 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:21.323 02:21:01 -- setup/driver.sh@65 -- # setup reset 00:04:21.323 02:21:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.323 02:21:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.890 00:04:21.890 real 0m1.476s 00:04:21.890 user 0m0.570s 00:04:21.890 sys 0m0.908s 00:04:21.890 02:21:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.890 ************************************ 00:04:21.890 END TEST guess_driver 00:04:21.890 02:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:21.890 ************************************ 00:04:21.890 00:04:21.890 real 0m2.318s 00:04:21.890 user 0m0.893s 00:04:21.890 sys 0m1.494s 00:04:21.890 02:21:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:21.890 02:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:21.890 ************************************ 00:04:21.890 END TEST driver 00:04:21.890 ************************************ 00:04:21.890 02:21:02 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:21.890 02:21:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:21.890 02:21:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:21.890 02:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:21.890 ************************************ 00:04:21.890 START TEST devices 00:04:21.890 ************************************ 00:04:21.890 02:21:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.149 * Looking for test storage... 00:04:22.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.149 02:21:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:22.149 02:21:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:22.149 02:21:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:22.149 02:21:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:22.149 02:21:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:22.149 02:21:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:22.149 02:21:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:22.149 02:21:02 -- scripts/common.sh@335 -- # IFS=.-: 00:04:22.149 02:21:02 -- scripts/common.sh@335 -- # read -ra ver1 00:04:22.149 02:21:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.149 02:21:02 -- scripts/common.sh@336 -- # read -ra ver2 00:04:22.149 02:21:02 -- scripts/common.sh@337 -- # local 'op=<' 00:04:22.149 02:21:02 -- scripts/common.sh@339 -- # ver1_l=2 00:04:22.149 02:21:02 -- scripts/common.sh@340 -- # ver2_l=1 00:04:22.149 02:21:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:22.149 02:21:02 -- scripts/common.sh@343 -- # case "$op" in 00:04:22.149 02:21:02 -- scripts/common.sh@344 -- # : 1 00:04:22.149 02:21:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:22.149 02:21:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
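The guess_driver test that just finished picks a userspace I/O driver in two steps: vfio is preferred when the host exposes IOMMU groups (or unsafe no-IOMMU mode is enabled), otherwise it falls back to uio_pci_generic if modprobe can resolve the module to a .ko, as seen in the trace. A minimal sketch of that decision under those assumptions (function name and output format are illustrative, not the exact setup/driver.sh helpers):

shopt -s nullglob    # so an empty /sys/kernel/iommu_groups yields a zero-length array

pick_driver() {
    local unsafe_vfio=""
    local -a iommu_groups

    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    iommu_groups=(/sys/kernel/iommu_groups/*)

    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi

    # No usable IOMMU (the case in this run): accept uio_pci_generic only if
    # modprobe can resolve it to an actual .ko on this kernel.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi

    echo 'No valid driver found'
    return 1
}

driver=$(pick_driver)
echo "Looking for driver=$driver"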
ver1_l : ver2_l) )) 00:04:22.149 02:21:02 -- scripts/common.sh@364 -- # decimal 1 00:04:22.149 02:21:02 -- scripts/common.sh@352 -- # local d=1 00:04:22.149 02:21:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.149 02:21:02 -- scripts/common.sh@354 -- # echo 1 00:04:22.149 02:21:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:22.149 02:21:02 -- scripts/common.sh@365 -- # decimal 2 00:04:22.149 02:21:02 -- scripts/common.sh@352 -- # local d=2 00:04:22.149 02:21:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.149 02:21:02 -- scripts/common.sh@354 -- # echo 2 00:04:22.149 02:21:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:22.149 02:21:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:22.149 02:21:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:22.149 02:21:02 -- scripts/common.sh@367 -- # return 0 00:04:22.149 02:21:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.149 02:21:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:22.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.149 --rc genhtml_branch_coverage=1 00:04:22.149 --rc genhtml_function_coverage=1 00:04:22.149 --rc genhtml_legend=1 00:04:22.149 --rc geninfo_all_blocks=1 00:04:22.149 --rc geninfo_unexecuted_blocks=1 00:04:22.149 00:04:22.149 ' 00:04:22.149 02:21:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:22.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.149 --rc genhtml_branch_coverage=1 00:04:22.149 --rc genhtml_function_coverage=1 00:04:22.149 --rc genhtml_legend=1 00:04:22.149 --rc geninfo_all_blocks=1 00:04:22.149 --rc geninfo_unexecuted_blocks=1 00:04:22.149 00:04:22.149 ' 00:04:22.149 02:21:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:22.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.149 --rc genhtml_branch_coverage=1 00:04:22.149 --rc genhtml_function_coverage=1 00:04:22.149 --rc genhtml_legend=1 00:04:22.149 --rc geninfo_all_blocks=1 00:04:22.149 --rc geninfo_unexecuted_blocks=1 00:04:22.149 00:04:22.149 ' 00:04:22.149 02:21:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:22.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.149 --rc genhtml_branch_coverage=1 00:04:22.149 --rc genhtml_function_coverage=1 00:04:22.149 --rc genhtml_legend=1 00:04:22.149 --rc geninfo_all_blocks=1 00:04:22.149 --rc geninfo_unexecuted_blocks=1 00:04:22.149 00:04:22.149 ' 00:04:22.149 02:21:02 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:22.149 02:21:02 -- setup/devices.sh@192 -- # setup reset 00:04:22.149 02:21:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.149 02:21:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.085 02:21:03 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:23.085 02:21:03 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:23.085 02:21:03 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:23.085 02:21:03 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:23.085 02:21:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.085 02:21:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:23.085 02:21:03 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:23.085 02:21:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.085 02:21:03 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:23.085 02:21:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.085 02:21:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:23.085 02:21:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:23.085 02:21:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.085 02:21:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:23.085 02:21:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.085 02:21:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:23.085 02:21:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:23.085 02:21:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:23.085 02:21:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:23.085 02:21:03 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:23.085 02:21:03 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:23.085 02:21:03 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:23.085 02:21:03 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:23.085 02:21:03 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:23.085 02:21:03 -- setup/devices.sh@196 -- # blocks=() 00:04:23.085 02:21:03 -- setup/devices.sh@196 -- # declare -a blocks 00:04:23.085 02:21:03 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:23.085 02:21:03 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:23.085 02:21:03 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:23.085 02:21:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.085 02:21:03 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:23.085 02:21:03 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.085 02:21:03 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:23.085 02:21:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:23.085 02:21:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:23.085 02:21:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.085 No valid GPT data, bailing 00:04:23.085 02:21:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.085 02:21:03 -- scripts/common.sh@393 -- # pt= 00:04:23.085 02:21:03 -- scripts/common.sh@394 -- # return 1 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.085 02:21:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.085 02:21:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.085 02:21:03 -- setup/common.sh@80 -- # echo 5368709120 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:23.085 02:21:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.085 02:21:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:23.085 02:21:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.085 02:21:03 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:23.085 02:21:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.085 02:21:03 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:23.085 02:21:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
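The is_block_zoned calls running above filter zoned namespaces out of the candidate device list: a device is excluded when /sys/block/<dev>/queue/zoned reports anything other than "none". A minimal sketch of that filter (helper name and the surrounding loop are assumptions based on the trace):

shopt -s nullglob

is_block_zoned() {
    local device=$1
    # No zoned attribute in sysfs: treat the device as not zoned.
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    if is_block_zoned "$dev"; then
        zoned_devs+=("$dev")
    fi
done
echo "zoned devices to exclude: ${zoned_devs[*]:-none}"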
00:04:23.085 02:21:03 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:23.085 02:21:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:23.085 No valid GPT data, bailing 00:04:23.085 02:21:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.085 02:21:03 -- scripts/common.sh@393 -- # pt= 00:04:23.085 02:21:03 -- scripts/common.sh@394 -- # return 1 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:23.085 02:21:03 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:23.085 02:21:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:23.085 02:21:03 -- setup/common.sh@80 -- # echo 4294967296 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.085 02:21:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.085 02:21:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:23.085 02:21:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.085 02:21:03 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:23.085 02:21:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.085 02:21:03 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:23.085 02:21:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:23.085 02:21:03 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:23.085 02:21:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:23.085 No valid GPT data, bailing 00:04:23.085 02:21:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:23.085 02:21:03 -- scripts/common.sh@393 -- # pt= 00:04:23.085 02:21:03 -- scripts/common.sh@394 -- # return 1 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:23.085 02:21:03 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:23.085 02:21:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:23.085 02:21:03 -- setup/common.sh@80 -- # echo 4294967296 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.085 02:21:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.085 02:21:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:23.085 02:21:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.085 02:21:03 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:23.085 02:21:03 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.085 02:21:03 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:23.085 02:21:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:23.085 02:21:03 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:23.085 02:21:03 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:23.085 02:21:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:23.344 No valid GPT data, bailing 00:04:23.344 02:21:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:23.344 02:21:03 -- scripts/common.sh@393 -- # pt= 00:04:23.344 02:21:03 -- scripts/common.sh@394 -- # return 1 00:04:23.344 02:21:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:23.344 02:21:03 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:23.344 02:21:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:23.344 02:21:03 -- setup/common.sh@80 -- # echo 4294967296 
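Each "No valid GPT data, bailing" above is the usability gate for a test disk: the device must not carry a recognizable partition table and must be at least min_disk_size (3221225472 bytes) large. A minimal sketch of that gate; the blkid PTTYPE probe and the threshold match the trace, while the 512-byte-sector size math and the skipped spdk-gpt.py probe are assumptions:

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as used by the test above

sec_size_to_bytes() {
    local dev=$1
    [[ -e /sys/block/$dev ]] || return 1
    # /sys/block/<dev>/size counts 512-byte sectors.
    echo $(( $(< "/sys/block/$dev/size") * 512 ))
}

disk_is_free_test_candidate() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "/dev/$block")
    [[ -z $pt ]] || return 1                                   # carries a partition table: skip it
    (( $(sec_size_to_bytes "$block") >= min_disk_size ))
}

disk_is_free_test_candidate nvme0n1 && echo "nvme0n1 can be used as the test disk"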
00:04:23.344 02:21:03 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.344 02:21:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.344 02:21:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:23.344 02:21:03 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:23.344 02:21:03 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:23.344 02:21:03 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:23.344 02:21:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.344 02:21:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.344 02:21:03 -- common/autotest_common.sh@10 -- # set +x 00:04:23.344 ************************************ 00:04:23.344 START TEST nvme_mount 00:04:23.344 ************************************ 00:04:23.344 02:21:03 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:23.344 02:21:03 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:23.344 02:21:03 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:23.344 02:21:03 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.344 02:21:03 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.344 02:21:03 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:23.344 02:21:03 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.344 02:21:03 -- setup/common.sh@40 -- # local part_no=1 00:04:23.344 02:21:03 -- setup/common.sh@41 -- # local size=1073741824 00:04:23.344 02:21:03 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.344 02:21:03 -- setup/common.sh@44 -- # parts=() 00:04:23.345 02:21:03 -- setup/common.sh@44 -- # local parts 00:04:23.345 02:21:03 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.345 02:21:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.345 02:21:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.345 02:21:03 -- setup/common.sh@46 -- # (( part++ )) 00:04:23.345 02:21:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.345 02:21:03 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:23.345 02:21:03 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.345 02:21:03 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.281 Creating new GPT entries in memory. 00:04:24.281 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.281 other utilities. 00:04:24.281 02:21:04 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.281 02:21:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.281 02:21:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.281 02:21:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.281 02:21:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:25.217 Creating new GPT entries in memory. 00:04:25.217 The operation has completed successfully. 
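The partition_drive step that just completed wipes the disk's GPT and carves one partition per requested part, serializing the sgdisk calls with flock so concurrent tests cannot race on the same device. A minimal sketch of that flow; the sector math mirrors the numbers in the trace (start 2048, 262144-sector parts, end 264191), and the function signature is an assumption:

partition_drive() {
    local disk=$1 part_no=${2:-1} size=${3:-1073741824}
    local part part_start=0 part_end=0

    (( size /= 4096 ))                        # bytes -> per-partition sector count, as in the trace
    sgdisk "/dev/$disk" --zap-all             # destroy any existing GPT/MBR

    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
    # The harness additionally waits on the partition uevents
    # (sync_dev_uevents.sh, as shown above) before touching the new nodes.
}

# partition_drive nvme0n1 1   -> one 262144-sector (~128 MiB) partition at sectors 2048..264191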
00:04:25.217 02:21:05 -- setup/common.sh@57 -- # (( part++ )) 00:04:25.217 02:21:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.217 02:21:05 -- setup/common.sh@62 -- # wait 53779 00:04:25.476 02:21:05 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.476 02:21:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:25.476 02:21:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.476 02:21:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.476 02:21:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.476 02:21:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.476 02:21:05 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.476 02:21:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:25.476 02:21:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.476 02:21:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:25.476 02:21:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:25.476 02:21:05 -- setup/devices.sh@53 -- # local found=0 00:04:25.476 02:21:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.476 02:21:05 -- setup/devices.sh@56 -- # : 00:04:25.476 02:21:05 -- setup/devices.sh@59 -- # local pci status 00:04:25.476 02:21:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.476 02:21:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:25.476 02:21:05 -- setup/devices.sh@47 -- # setup output config 00:04:25.476 02:21:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.476 02:21:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.476 02:21:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.476 02:21:06 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:25.476 02:21:06 -- setup/devices.sh@63 -- # found=1 00:04:25.476 02:21:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.476 02:21:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.476 02:21:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 02:21:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.045 02:21:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 02:21:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.045 02:21:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.045 02:21:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.045 02:21:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:26.045 02:21:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.045 02:21:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.045 02:21:06 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.045 02:21:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:26.045 02:21:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.045 02:21:06 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.045 02:21:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.045 02:21:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.045 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.045 02:21:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.045 02:21:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.303 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.303 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.303 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.303 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.303 02:21:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:26.303 02:21:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:26.303 02:21:06 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.303 02:21:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:26.303 02:21:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:26.303 02:21:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.304 02:21:06 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.304 02:21:06 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:26.304 02:21:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:26.304 02:21:06 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.304 02:21:06 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.304 02:21:06 -- setup/devices.sh@53 -- # local found=0 00:04:26.304 02:21:06 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.304 02:21:06 -- setup/devices.sh@56 -- # : 00:04:26.304 02:21:06 -- setup/devices.sh@59 -- # local pci status 00:04:26.304 02:21:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.304 02:21:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:26.304 02:21:06 -- setup/devices.sh@47 -- # setup output config 00:04:26.304 02:21:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.304 02:21:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.562 02:21:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.562 02:21:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:26.562 02:21:07 -- setup/devices.sh@63 -- # found=1 00:04:26.562 02:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.562 02:21:07 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.562 
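The cleanup_nvme teardown interleaved between the two mount verifications above unmounts the test mount point and wipes filesystem and partition-table signatures so the next pass starts from a blank device. A minimal sketch under those assumptions (the paths are the ones used throughout this log):

nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

cleanup_nvme() {
    if mountpoint -q "$nvme_mount"; then
        umount "$nvme_mount"
    fi
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # clears the ext4 magic (53 ef @ 0x438)
    [[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1     # clears the GPT headers and protective MBR
}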
02:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.820 02:21:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.821 02:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.079 02:21:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.079 02:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.079 02:21:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.079 02:21:07 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:27.079 02:21:07 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.079 02:21:07 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.079 02:21:07 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.079 02:21:07 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.079 02:21:07 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:27.079 02:21:07 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:27.079 02:21:07 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:27.079 02:21:07 -- setup/devices.sh@50 -- # local mount_point= 00:04:27.079 02:21:07 -- setup/devices.sh@51 -- # local test_file= 00:04:27.079 02:21:07 -- setup/devices.sh@53 -- # local found=0 00:04:27.079 02:21:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.079 02:21:07 -- setup/devices.sh@59 -- # local pci status 00:04:27.079 02:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.079 02:21:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:27.079 02:21:07 -- setup/devices.sh@47 -- # setup output config 00:04:27.079 02:21:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.080 02:21:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.338 02:21:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.338 02:21:07 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:27.338 02:21:07 -- setup/devices.sh@63 -- # found=1 00:04:27.338 02:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.338 02:21:07 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.338 02:21:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.598 02:21:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.598 02:21:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.857 02:21:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:27.857 02:21:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.857 02:21:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.857 02:21:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.857 02:21:08 -- setup/devices.sh@68 -- # return 0 00:04:27.857 02:21:08 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:27.857 02:21:08 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.857 02:21:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.857 02:21:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.857 02:21:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.857 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:27.857 00:04:27.857 real 0m4.584s 00:04:27.857 user 0m1.027s 00:04:27.857 sys 0m1.236s 00:04:27.857 02:21:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:27.857 02:21:08 -- common/autotest_common.sh@10 -- # set +x 00:04:27.857 ************************************ 00:04:27.857 END TEST nvme_mount 00:04:27.857 ************************************ 00:04:27.857 02:21:08 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:27.857 02:21:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.857 02:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.857 02:21:08 -- common/autotest_common.sh@10 -- # set +x 00:04:27.857 ************************************ 00:04:27.857 START TEST dm_mount 00:04:27.857 ************************************ 00:04:27.857 02:21:08 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:27.857 02:21:08 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:27.857 02:21:08 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:27.857 02:21:08 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:27.857 02:21:08 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:27.857 02:21:08 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.857 02:21:08 -- setup/common.sh@40 -- # local part_no=2 00:04:27.857 02:21:08 -- setup/common.sh@41 -- # local size=1073741824 00:04:27.857 02:21:08 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.857 02:21:08 -- setup/common.sh@44 -- # parts=() 00:04:27.857 02:21:08 -- setup/common.sh@44 -- # local parts 00:04:27.857 02:21:08 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.857 02:21:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.857 02:21:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.857 02:21:08 -- setup/common.sh@46 -- # (( part++ )) 00:04:27.857 02:21:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.857 02:21:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.857 02:21:08 -- setup/common.sh@46 -- # (( part++ )) 00:04:27.857 02:21:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.857 02:21:08 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:27.857 02:21:08 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.857 02:21:08 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:29.234 Creating new GPT entries in memory. 00:04:29.234 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:29.234 other utilities. 00:04:29.234 02:21:09 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:29.234 02:21:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.234 02:21:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.234 02:21:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.234 02:21:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:30.171 Creating new GPT entries in memory. 00:04:30.171 The operation has completed successfully. 00:04:30.171 02:21:10 -- setup/common.sh@57 -- # (( part++ )) 00:04:30.171 02:21:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.171 02:21:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:30.171 02:21:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.171 02:21:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:31.108 The operation has completed successfully. 00:04:31.108 02:21:11 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.108 02:21:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.108 02:21:11 -- setup/common.sh@62 -- # wait 54239 00:04:31.108 02:21:11 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.108 02:21:11 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.108 02:21:11 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.108 02:21:11 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.108 02:21:11 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.108 02:21:11 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.108 02:21:11 -- setup/devices.sh@161 -- # break 00:04:31.108 02:21:11 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.108 02:21:11 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.108 02:21:11 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:31.108 02:21:11 -- setup/devices.sh@166 -- # dm=dm-0 00:04:31.108 02:21:11 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:31.108 02:21:11 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:31.108 02:21:11 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.108 02:21:11 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:31.108 02:21:11 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.108 02:21:11 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.108 02:21:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.108 02:21:11 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.108 02:21:11 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.109 02:21:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:31.109 02:21:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:31.109 02:21:11 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.109 02:21:11 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.109 02:21:11 -- setup/devices.sh@53 -- # local found=0 00:04:31.109 02:21:11 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.109 02:21:11 -- setup/devices.sh@56 -- # : 00:04:31.109 02:21:11 -- setup/devices.sh@59 -- # local pci status 00:04:31.109 02:21:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.109 02:21:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:31.109 02:21:11 -- setup/devices.sh@47 -- # setup output config 00:04:31.109 02:21:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.109 02:21:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.367 02:21:11 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.367 02:21:11 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:31.367 02:21:11 -- setup/devices.sh@63 -- # found=1 00:04:31.367 02:21:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.367 02:21:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.367 02:21:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.625 02:21:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.625 02:21:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.625 02:21:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.625 02:21:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.883 02:21:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.883 02:21:12 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:31.883 02:21:12 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.883 02:21:12 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.883 02:21:12 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:31.883 02:21:12 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:31.883 02:21:12 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:31.883 02:21:12 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:31.883 02:21:12 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:31.883 02:21:12 -- setup/devices.sh@50 -- # local mount_point= 00:04:31.883 02:21:12 -- setup/devices.sh@51 -- # local test_file= 00:04:31.883 02:21:12 -- setup/devices.sh@53 -- # local found=0 00:04:31.883 02:21:12 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:31.883 02:21:12 -- setup/devices.sh@59 -- # local pci status 00:04:31.883 02:21:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.883 02:21:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:31.883 02:21:12 -- setup/devices.sh@47 -- # setup output config 00:04:31.883 02:21:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.883 02:21:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.883 02:21:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.884 02:21:12 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:31.884 02:21:12 -- setup/devices.sh@63 -- # found=1 00:04:31.884 02:21:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.884 02:21:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.884 02:21:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.457 02:21:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.457 02:21:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.457 02:21:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.457 02:21:12 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.457 02:21:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.457 02:21:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:32.457 02:21:12 -- setup/devices.sh@68 -- # return 0 00:04:32.457 02:21:12 -- setup/devices.sh@187 -- # cleanup_dm 00:04:32.457 02:21:12 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:32.457 02:21:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:32.457 02:21:12 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:32.457 02:21:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.457 02:21:13 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:32.457 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:32.457 02:21:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:32.457 02:21:13 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:32.457 00:04:32.457 real 0m4.621s 00:04:32.457 user 0m0.687s 00:04:32.457 sys 0m0.850s 00:04:32.457 02:21:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:32.457 02:21:13 -- common/autotest_common.sh@10 -- # set +x 00:04:32.457 ************************************ 00:04:32.457 END TEST dm_mount 00:04:32.457 ************************************ 00:04:32.457 02:21:13 -- setup/devices.sh@1 -- # cleanup 00:04:32.457 02:21:13 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:32.457 02:21:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.457 02:21:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:32.457 02:21:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:32.747 02:21:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:32.747 02:21:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:32.747 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:32.747 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:32.747 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:32.747 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:32.747 02:21:13 -- setup/devices.sh@12 -- # cleanup_dm 00:04:32.747 02:21:13 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:33.017 02:21:13 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:33.018 02:21:13 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.018 02:21:13 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:33.018 02:21:13 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.018 02:21:13 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:33.018 00:04:33.018 real 0m10.880s 00:04:33.018 user 0m2.444s 00:04:33.018 sys 0m2.736s 00:04:33.018 02:21:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.018 ************************************ 00:04:33.018 02:21:13 -- common/autotest_common.sh@10 -- # set +x 00:04:33.018 END TEST devices 00:04:33.018 ************************************ 00:04:33.018 00:04:33.018 real 0m22.938s 00:04:33.018 user 0m7.859s 00:04:33.018 sys 0m9.565s 00:04:33.018 02:21:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:33.018 02:21:13 -- common/autotest_common.sh@10 -- # set +x 00:04:33.018 ************************************ 00:04:33.018 END TEST setup.sh 00:04:33.018 ************************************ 00:04:33.018 02:21:13 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:33.018 Hugepages 00:04:33.018 node hugesize free / total 00:04:33.018 node0 1048576kB 0 / 0 00:04:33.018 node0 2048kB 2048 / 2048 00:04:33.018 00:04:33.018 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:33.276 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:33.276 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:33.276 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:33.276 02:21:13 -- spdk/autotest.sh@128 -- # uname -s 00:04:33.276 02:21:13 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:33.276 02:21:13 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:33.276 02:21:13 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.211 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.211 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.211 02:21:14 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:35.147 02:21:15 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:35.147 02:21:15 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:35.147 02:21:15 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:35.147 02:21:15 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:35.147 02:21:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:35.147 02:21:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:35.147 02:21:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.147 02:21:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:35.147 02:21:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:35.406 02:21:15 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:35.406 02:21:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:35.406 02:21:15 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.664 Waiting for block devices as requested 00:04:35.664 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:35.664 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:35.921 02:21:16 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:35.922 02:21:16 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:35.922 02:21:16 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:35.922 02:21:16 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:35.922 02:21:16 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:35.922 02:21:16 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:35.922 02:21:16 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:35.922 02:21:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:35.922 02:21:16 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:35.922 02:21:16 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:35.922 02:21:16 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:35.922 02:21:16 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:35.922 02:21:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:35.922 02:21:16 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:35.922 02:21:16 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:35.922 02:21:16 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:35.922 02:21:16 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:35.922 02:21:16 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:35.922 02:21:16 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:35.922 02:21:16 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:35.922 02:21:16 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:35.922 02:21:16 -- common/autotest_common.sh@1552 -- # continue 00:04:35.922 02:21:16 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:35.922 02:21:16 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:35.922 02:21:16 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:35.922 02:21:16 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:35.922 02:21:16 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:35.922 02:21:16 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:35.922 02:21:16 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:35.922 02:21:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:35.922 02:21:16 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:35.922 02:21:16 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:35.922 02:21:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:35.922 02:21:16 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:35.922 02:21:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:35.922 02:21:16 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:35.922 02:21:16 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:35.922 02:21:16 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:35.922 02:21:16 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:35.922 02:21:16 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:35.922 02:21:16 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:35.922 02:21:16 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:35.922 02:21:16 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:35.922 02:21:16 -- common/autotest_common.sh@1552 -- # continue 00:04:35.922 02:21:16 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:35.922 02:21:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.922 02:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:35.922 02:21:16 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:35.922 02:21:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.922 02:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:35.922 02:21:16 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.856 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:36.856 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:36.856 02:21:17 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:36.856 02:21:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.856 02:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:36.856 02:21:17 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:36.856 02:21:17 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:36.856 02:21:17 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:36.856 02:21:17 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:36.856 02:21:17 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:36.856 02:21:17 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:36.856 02:21:17 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:36.856 02:21:17 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:36.856 02:21:17 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:36.856 02:21:17 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:36.856 02:21:17 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:36.856 02:21:17 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:36.856 02:21:17 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:37.113 02:21:17 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:37.113 02:21:17 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:37.113 02:21:17 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:37.113 02:21:17 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:37.114 02:21:17 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:37.114 02:21:17 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:37.114 02:21:17 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:37.114 02:21:17 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:37.114 02:21:17 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:37.114 02:21:17 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:37.114 02:21:17 -- common/autotest_common.sh@1588 -- # return 0 00:04:37.114 02:21:17 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:37.114 02:21:17 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:37.114 02:21:17 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:37.114 02:21:17 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:37.114 02:21:17 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:37.114 02:21:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.114 02:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:37.114 02:21:17 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:37.114 02:21:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.114 02:21:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.114 02:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:37.114 ************************************ 00:04:37.114 START TEST env 00:04:37.114 ************************************ 00:04:37.114 02:21:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:37.114 * Looking for test storage... 
00:04:37.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:37.114 02:21:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:37.114 02:21:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:37.114 02:21:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:37.114 02:21:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:37.114 02:21:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:37.114 02:21:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:37.114 02:21:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:37.114 02:21:17 -- scripts/common.sh@335 -- # IFS=.-: 00:04:37.114 02:21:17 -- scripts/common.sh@335 -- # read -ra ver1 00:04:37.114 02:21:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.114 02:21:17 -- scripts/common.sh@336 -- # read -ra ver2 00:04:37.114 02:21:17 -- scripts/common.sh@337 -- # local 'op=<' 00:04:37.114 02:21:17 -- scripts/common.sh@339 -- # ver1_l=2 00:04:37.114 02:21:17 -- scripts/common.sh@340 -- # ver2_l=1 00:04:37.114 02:21:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:37.114 02:21:17 -- scripts/common.sh@343 -- # case "$op" in 00:04:37.114 02:21:17 -- scripts/common.sh@344 -- # : 1 00:04:37.114 02:21:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:37.114 02:21:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.114 02:21:17 -- scripts/common.sh@364 -- # decimal 1 00:04:37.114 02:21:17 -- scripts/common.sh@352 -- # local d=1 00:04:37.114 02:21:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.114 02:21:17 -- scripts/common.sh@354 -- # echo 1 00:04:37.114 02:21:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:37.114 02:21:17 -- scripts/common.sh@365 -- # decimal 2 00:04:37.114 02:21:17 -- scripts/common.sh@352 -- # local d=2 00:04:37.114 02:21:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.114 02:21:17 -- scripts/common.sh@354 -- # echo 2 00:04:37.114 02:21:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:37.114 02:21:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:37.114 02:21:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:37.114 02:21:17 -- scripts/common.sh@367 -- # return 0 00:04:37.114 02:21:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.114 02:21:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:37.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.114 --rc genhtml_branch_coverage=1 00:04:37.114 --rc genhtml_function_coverage=1 00:04:37.114 --rc genhtml_legend=1 00:04:37.114 --rc geninfo_all_blocks=1 00:04:37.114 --rc geninfo_unexecuted_blocks=1 00:04:37.114 00:04:37.114 ' 00:04:37.114 02:21:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:37.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.114 --rc genhtml_branch_coverage=1 00:04:37.114 --rc genhtml_function_coverage=1 00:04:37.114 --rc genhtml_legend=1 00:04:37.114 --rc geninfo_all_blocks=1 00:04:37.114 --rc geninfo_unexecuted_blocks=1 00:04:37.114 00:04:37.114 ' 00:04:37.114 02:21:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:37.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.114 --rc genhtml_branch_coverage=1 00:04:37.114 --rc genhtml_function_coverage=1 00:04:37.114 --rc genhtml_legend=1 00:04:37.114 --rc geninfo_all_blocks=1 00:04:37.114 --rc geninfo_unexecuted_blocks=1 00:04:37.114 00:04:37.114 ' 00:04:37.114 02:21:17 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:37.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.114 --rc genhtml_branch_coverage=1 00:04:37.114 --rc genhtml_function_coverage=1 00:04:37.114 --rc genhtml_legend=1 00:04:37.114 --rc geninfo_all_blocks=1 00:04:37.114 --rc geninfo_unexecuted_blocks=1 00:04:37.114 00:04:37.114 ' 00:04:37.114 02:21:17 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:37.114 02:21:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.114 02:21:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.114 02:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:37.114 ************************************ 00:04:37.114 START TEST env_memory 00:04:37.114 ************************************ 00:04:37.114 02:21:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:37.114 00:04:37.114 00:04:37.114 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.114 http://cunit.sourceforge.net/ 00:04:37.114 00:04:37.114 00:04:37.114 Suite: memory 00:04:37.372 Test: alloc and free memory map ...[2024-11-21 02:21:17.776140] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:37.372 passed 00:04:37.373 Test: mem map translation ...[2024-11-21 02:21:17.807125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:37.373 [2024-11-21 02:21:17.807170] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:37.373 [2024-11-21 02:21:17.807225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:37.373 [2024-11-21 02:21:17.807236] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:37.373 passed 00:04:37.373 Test: mem map registration ...[2024-11-21 02:21:17.870866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:37.373 [2024-11-21 02:21:17.870898] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:37.373 passed 00:04:37.373 Test: mem map adjacent registrations ...passed 00:04:37.373 00:04:37.373 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.373 suites 1 1 n/a 0 0 00:04:37.373 tests 4 4 4 0 0 00:04:37.373 asserts 152 152 152 0 n/a 00:04:37.373 00:04:37.373 Elapsed time = 0.213 seconds 00:04:37.373 00:04:37.373 real 0m0.233s 00:04:37.373 user 0m0.216s 00:04:37.373 sys 0m0.013s 00:04:37.373 02:21:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:37.373 02:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:37.373 ************************************ 00:04:37.373 END TEST env_memory 00:04:37.373 ************************************ 00:04:37.373 02:21:18 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:37.373 02:21:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.373 02:21:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.373 02:21:18 -- 
common/autotest_common.sh@10 -- # set +x 00:04:37.373 ************************************ 00:04:37.373 START TEST env_vtophys 00:04:37.373 ************************************ 00:04:37.373 02:21:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:37.632 EAL: lib.eal log level changed from notice to debug 00:04:37.632 EAL: Detected lcore 0 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 1 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 2 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 3 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 4 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 5 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 6 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 7 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 8 as core 0 on socket 0 00:04:37.632 EAL: Detected lcore 9 as core 0 on socket 0 00:04:37.632 EAL: Maximum logical cores by configuration: 128 00:04:37.632 EAL: Detected CPU lcores: 10 00:04:37.632 EAL: Detected NUMA nodes: 1 00:04:37.632 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:37.632 EAL: Detected shared linkage of DPDK 00:04:37.632 EAL: No shared files mode enabled, IPC will be disabled 00:04:37.632 EAL: Selected IOVA mode 'PA' 00:04:37.632 EAL: Probing VFIO support... 00:04:37.632 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:37.632 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:37.632 EAL: Ask a virtual area of 0x2e000 bytes 00:04:37.632 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:37.632 EAL: Setting up physically contiguous memory... 00:04:37.632 EAL: Setting maximum number of open files to 524288 00:04:37.632 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:37.632 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:37.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.632 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:37.632 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.632 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:37.632 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:37.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.632 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:37.632 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.632 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:37.632 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:37.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.632 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:37.632 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.632 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:37.632 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:37.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:37.632 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:37.632 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:37.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:37.632 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:37.632 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
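The 2 MB ("page size 0x800kB") memseg lists reserved above are backed by the hugepages reported earlier by setup.sh status (node0 2048kB, 2048 of 2048). If this stage fails because no hugepages are available, they can be re-provisioned with the same setup.sh used throughout this run; a minimal sketch, where the HUGEMEM size is an assumed value rather than one taken from this run:

  # re-reserve 2 MB hugepages; HUGEMEM is in MB and 4096 is only an example size
  sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # confirm what is reserved and how the NVMe controllers are currently bound
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
  grep -i huge /proc/meminfo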
00:04:37.632 EAL: Hugepages will be freed exactly as allocated. 00:04:37.632 EAL: No shared files mode enabled, IPC is disabled 00:04:37.632 EAL: No shared files mode enabled, IPC is disabled 00:04:37.632 EAL: TSC frequency is ~2200000 KHz 00:04:37.632 EAL: Main lcore 0 is ready (tid=7f666af05a00;cpuset=[0]) 00:04:37.632 EAL: Trying to obtain current memory policy. 00:04:37.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.633 EAL: Restoring previous memory policy: 0 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was expanded by 2MB 00:04:37.633 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:37.633 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:37.633 EAL: Mem event callback 'spdk:(nil)' registered 00:04:37.633 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:37.633 00:04:37.633 00:04:37.633 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.633 http://cunit.sourceforge.net/ 00:04:37.633 00:04:37.633 00:04:37.633 Suite: components_suite 00:04:37.633 Test: vtophys_malloc_test ...passed 00:04:37.633 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:37.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.633 EAL: Restoring previous memory policy: 4 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was expanded by 4MB 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was shrunk by 4MB 00:04:37.633 EAL: Trying to obtain current memory policy. 00:04:37.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.633 EAL: Restoring previous memory policy: 4 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was expanded by 6MB 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was shrunk by 6MB 00:04:37.633 EAL: Trying to obtain current memory policy. 00:04:37.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.633 EAL: Restoring previous memory policy: 4 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was expanded by 10MB 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was shrunk by 10MB 00:04:37.633 EAL: Trying to obtain current memory policy. 
00:04:37.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.633 EAL: Restoring previous memory policy: 4 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was expanded by 18MB 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was shrunk by 18MB 00:04:37.633 EAL: Trying to obtain current memory policy. 00:04:37.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.633 EAL: Restoring previous memory policy: 4 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was expanded by 34MB 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was shrunk by 34MB 00:04:37.633 EAL: Trying to obtain current memory policy. 00:04:37.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.633 EAL: Restoring previous memory policy: 4 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was expanded by 66MB 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.633 EAL: Trying to obtain current memory policy. 00:04:37.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.633 EAL: Restoring previous memory policy: 4 00:04:37.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.633 EAL: request: mp_malloc_sync 00:04:37.633 EAL: No shared files mode enabled, IPC is disabled 00:04:37.633 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.892 EAL: request: mp_malloc_sync 00:04:37.892 EAL: No shared files mode enabled, IPC is disabled 00:04:37.892 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.892 EAL: Trying to obtain current memory policy. 00:04:37.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.892 EAL: Restoring previous memory policy: 4 00:04:37.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.892 EAL: request: mp_malloc_sync 00:04:37.892 EAL: No shared files mode enabled, IPC is disabled 00:04:37.892 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.892 EAL: request: mp_malloc_sync 00:04:37.892 EAL: No shared files mode enabled, IPC is disabled 00:04:37.892 EAL: Heap on socket 0 was shrunk by 258MB 00:04:37.892 EAL: Trying to obtain current memory policy. 
00:04:37.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.151 EAL: Restoring previous memory policy: 4 00:04:38.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.151 EAL: request: mp_malloc_sync 00:04:38.151 EAL: No shared files mode enabled, IPC is disabled 00:04:38.151 EAL: Heap on socket 0 was expanded by 514MB 00:04:38.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.410 EAL: request: mp_malloc_sync 00:04:38.410 EAL: No shared files mode enabled, IPC is disabled 00:04:38.410 EAL: Heap on socket 0 was shrunk by 514MB 00:04:38.410 EAL: Trying to obtain current memory policy. 00:04:38.410 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.669 EAL: Restoring previous memory policy: 4 00:04:38.669 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.669 EAL: request: mp_malloc_sync 00:04:38.669 EAL: No shared files mode enabled, IPC is disabled 00:04:38.669 EAL: Heap on socket 0 was expanded by 1026MB 00:04:38.926 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.184 passed 00:04:39.184 00:04:39.184 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.184 suites 1 1 n/a 0 0 00:04:39.184 tests 2 2 2 0 0 00:04:39.184 asserts 5316 5316 5316 0 n/a 00:04:39.184 00:04:39.184 Elapsed time = 1.615 seconds 00:04:39.184 EAL: request: mp_malloc_sync 00:04:39.184 EAL: No shared files mode enabled, IPC is disabled 00:04:39.184 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:39.184 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.184 EAL: request: mp_malloc_sync 00:04:39.184 EAL: No shared files mode enabled, IPC is disabled 00:04:39.184 EAL: Heap on socket 0 was shrunk by 2MB 00:04:39.184 EAL: No shared files mode enabled, IPC is disabled 00:04:39.184 EAL: No shared files mode enabled, IPC is disabled 00:04:39.184 EAL: No shared files mode enabled, IPC is disabled 00:04:39.442 00:04:39.442 real 0m1.813s 00:04:39.442 user 0m1.035s 00:04:39.442 sys 0m0.642s 00:04:39.442 02:21:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.442 02:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:39.442 ************************************ 00:04:39.442 END TEST env_vtophys 00:04:39.442 ************************************ 00:04:39.442 02:21:19 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:39.442 02:21:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.442 02:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.442 02:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:39.442 ************************************ 00:04:39.442 START TEST env_pci 00:04:39.442 ************************************ 00:04:39.442 02:21:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:39.442 00:04:39.442 00:04:39.442 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.442 http://cunit.sourceforge.net/ 00:04:39.442 00:04:39.442 00:04:39.442 Suite: pci 00:04:39.442 Test: pci_hook ...[2024-11-21 02:21:19.895622] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 55390 has claimed it 00:04:39.442 passed 00:04:39.442 00:04:39.442 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.442 suites 1 1 n/a 0 0 00:04:39.442 tests 1 1 1 0 0 00:04:39.442 asserts 25 25 25 0 n/a 00:04:39.442 00:04:39.442 Elapsed time = 0.002 seconds 00:04:39.442 EAL: Cannot find device (10000:00:01.0) 00:04:39.442 EAL: Failed to attach device 
on primary process 00:04:39.442 00:04:39.442 real 0m0.023s 00:04:39.442 user 0m0.011s 00:04:39.442 sys 0m0.011s 00:04:39.442 02:21:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.442 02:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:39.442 ************************************ 00:04:39.442 END TEST env_pci 00:04:39.442 ************************************ 00:04:39.442 02:21:19 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:39.442 02:21:19 -- env/env.sh@15 -- # uname 00:04:39.442 02:21:19 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:39.442 02:21:19 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:39.442 02:21:19 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:39.442 02:21:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:39.442 02:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.442 02:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:39.442 ************************************ 00:04:39.442 START TEST env_dpdk_post_init 00:04:39.442 ************************************ 00:04:39.442 02:21:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:39.442 EAL: Detected CPU lcores: 10 00:04:39.442 EAL: Detected NUMA nodes: 1 00:04:39.442 EAL: Detected shared linkage of DPDK 00:04:39.442 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.442 EAL: Selected IOVA mode 'PA' 00:04:39.700 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.700 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:39.700 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:39.700 Starting DPDK initialization... 00:04:39.700 Starting SPDK post initialization... 00:04:39.700 SPDK NVMe probe 00:04:39.700 Attaching to 0000:00:06.0 00:04:39.700 Attaching to 0000:00:07.0 00:04:39.700 Attached to 0000:00:06.0 00:04:39.700 Attached to 0000:00:07.0 00:04:39.700 Cleaning up... 
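The two probes attach here only because the controllers at 0000:00:06.0 and 0000:00:07.0 were rebound from the kernel nvme driver to uio_pci_generic earlier in the run. The same binding can be done by hand before launching env_dpdk_post_init; a sketch built from the variables visible in this run (passing a space-separated list to PCI_ALLOWED is an assumption):

  # bind only the two test controllers to a userspace-capable driver
  sudo PCI_ALLOWED="0000:00:06.0 0000:00:07.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # hand them back to the kernel nvme driver afterwards
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset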
00:04:39.700 00:04:39.700 real 0m0.168s 00:04:39.700 user 0m0.032s 00:04:39.700 sys 0m0.036s 00:04:39.700 02:21:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.700 02:21:20 -- common/autotest_common.sh@10 -- # set +x 00:04:39.700 ************************************ 00:04:39.700 END TEST env_dpdk_post_init 00:04:39.700 ************************************ 00:04:39.700 02:21:20 -- env/env.sh@26 -- # uname 00:04:39.700 02:21:20 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:39.700 02:21:20 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.700 02:21:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.700 02:21:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.700 02:21:20 -- common/autotest_common.sh@10 -- # set +x 00:04:39.700 ************************************ 00:04:39.700 START TEST env_mem_callbacks 00:04:39.700 ************************************ 00:04:39.700 02:21:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.700 EAL: Detected CPU lcores: 10 00:04:39.700 EAL: Detected NUMA nodes: 1 00:04:39.700 EAL: Detected shared linkage of DPDK 00:04:39.700 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.700 EAL: Selected IOVA mode 'PA' 00:04:39.700 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.700 00:04:39.700 00:04:39.700 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.700 http://cunit.sourceforge.net/ 00:04:39.700 00:04:39.700 00:04:39.700 Suite: memory 00:04:39.700 Test: test ... 00:04:39.700 register 0x200000200000 2097152 00:04:39.700 malloc 3145728 00:04:39.700 register 0x200000400000 4194304 00:04:39.700 buf 0x200000500000 len 3145728 PASSED 00:04:39.700 malloc 64 00:04:39.700 buf 0x2000004fff40 len 64 PASSED 00:04:39.700 malloc 4194304 00:04:39.700 register 0x200000800000 6291456 00:04:39.700 buf 0x200000a00000 len 4194304 PASSED 00:04:39.700 free 0x200000500000 3145728 00:04:39.700 free 0x2000004fff40 64 00:04:39.700 unregister 0x200000400000 4194304 PASSED 00:04:39.700 free 0x200000a00000 4194304 00:04:39.701 unregister 0x200000800000 6291456 PASSED 00:04:39.701 malloc 8388608 00:04:39.701 register 0x200000400000 10485760 00:04:39.701 buf 0x200000600000 len 8388608 PASSED 00:04:39.701 free 0x200000600000 8388608 00:04:39.701 unregister 0x200000400000 10485760 PASSED 00:04:39.701 passed 00:04:39.701 00:04:39.701 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.701 suites 1 1 n/a 0 0 00:04:39.701 tests 1 1 1 0 0 00:04:39.701 asserts 15 15 15 0 n/a 00:04:39.701 00:04:39.701 Elapsed time = 0.010 seconds 00:04:39.701 00:04:39.701 real 0m0.146s 00:04:39.701 user 0m0.022s 00:04:39.701 sys 0m0.023s 00:04:39.701 02:21:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.701 02:21:20 -- common/autotest_common.sh@10 -- # set +x 00:04:39.701 ************************************ 00:04:39.701 END TEST env_mem_callbacks 00:04:39.701 ************************************ 00:04:39.959 00:04:39.959 real 0m2.842s 00:04:39.959 user 0m1.505s 00:04:39.959 sys 0m0.973s 00:04:39.959 02:21:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.959 ************************************ 00:04:39.959 END TEST env 00:04:39.959 02:21:20 -- common/autotest_common.sh@10 -- # set +x 00:04:39.959 ************************************ 00:04:39.959 02:21:20 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
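Every suite in this run is launched through the run_test wrapper, so the rpc suite that starts here can also be replayed on its own outside of autotest; a sketch, with root assumed for hugepage and device access:

  sudo /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh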
00:04:39.959 02:21:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.959 02:21:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.959 02:21:20 -- common/autotest_common.sh@10 -- # set +x 00:04:39.959 ************************************ 00:04:39.959 START TEST rpc 00:04:39.959 ************************************ 00:04:39.959 02:21:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:39.959 * Looking for test storage... 00:04:39.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.959 02:21:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:39.959 02:21:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:39.959 02:21:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:39.959 02:21:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:39.959 02:21:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:39.959 02:21:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:39.959 02:21:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:39.959 02:21:20 -- scripts/common.sh@335 -- # IFS=.-: 00:04:39.959 02:21:20 -- scripts/common.sh@335 -- # read -ra ver1 00:04:40.219 02:21:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.219 02:21:20 -- scripts/common.sh@336 -- # read -ra ver2 00:04:40.219 02:21:20 -- scripts/common.sh@337 -- # local 'op=<' 00:04:40.219 02:21:20 -- scripts/common.sh@339 -- # ver1_l=2 00:04:40.219 02:21:20 -- scripts/common.sh@340 -- # ver2_l=1 00:04:40.219 02:21:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:40.219 02:21:20 -- scripts/common.sh@343 -- # case "$op" in 00:04:40.219 02:21:20 -- scripts/common.sh@344 -- # : 1 00:04:40.219 02:21:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:40.219 02:21:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.219 02:21:20 -- scripts/common.sh@364 -- # decimal 1 00:04:40.219 02:21:20 -- scripts/common.sh@352 -- # local d=1 00:04:40.219 02:21:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.219 02:21:20 -- scripts/common.sh@354 -- # echo 1 00:04:40.219 02:21:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:40.219 02:21:20 -- scripts/common.sh@365 -- # decimal 2 00:04:40.219 02:21:20 -- scripts/common.sh@352 -- # local d=2 00:04:40.219 02:21:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.219 02:21:20 -- scripts/common.sh@354 -- # echo 2 00:04:40.219 02:21:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:40.219 02:21:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:40.219 02:21:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:40.219 02:21:20 -- scripts/common.sh@367 -- # return 0 00:04:40.219 02:21:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.219 02:21:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:40.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.219 --rc genhtml_branch_coverage=1 00:04:40.219 --rc genhtml_function_coverage=1 00:04:40.219 --rc genhtml_legend=1 00:04:40.219 --rc geninfo_all_blocks=1 00:04:40.219 --rc geninfo_unexecuted_blocks=1 00:04:40.219 00:04:40.219 ' 00:04:40.219 02:21:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:40.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.219 --rc genhtml_branch_coverage=1 00:04:40.219 --rc genhtml_function_coverage=1 00:04:40.219 --rc genhtml_legend=1 00:04:40.219 --rc geninfo_all_blocks=1 00:04:40.219 --rc geninfo_unexecuted_blocks=1 00:04:40.219 00:04:40.219 ' 00:04:40.219 02:21:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:40.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.219 --rc genhtml_branch_coverage=1 00:04:40.219 --rc genhtml_function_coverage=1 00:04:40.219 --rc genhtml_legend=1 00:04:40.219 --rc geninfo_all_blocks=1 00:04:40.219 --rc geninfo_unexecuted_blocks=1 00:04:40.219 00:04:40.219 ' 00:04:40.219 02:21:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:40.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.219 --rc genhtml_branch_coverage=1 00:04:40.219 --rc genhtml_function_coverage=1 00:04:40.219 --rc genhtml_legend=1 00:04:40.219 --rc geninfo_all_blocks=1 00:04:40.219 --rc geninfo_unexecuted_blocks=1 00:04:40.219 00:04:40.219 ' 00:04:40.219 02:21:20 -- rpc/rpc.sh@65 -- # spdk_pid=55507 00:04:40.219 02:21:20 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:40.219 02:21:20 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.219 02:21:20 -- rpc/rpc.sh@67 -- # waitforlisten 55507 00:04:40.219 02:21:20 -- common/autotest_common.sh@829 -- # '[' -z 55507 ']' 00:04:40.219 02:21:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.219 02:21:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.219 02:21:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
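Once spdk_tgt is listening on /var/tmp/spdk.sock, the RPCs that the rpc_integrity test issues below through rpc_cmd can also be sent by hand. A sketch using scripts/rpc.py as the client (the client choice is an assumption; the run itself only shows the rpc_cmd wrapper), with the sizes and names taken from the test output:

  # start the target with the bdev tracepoint group enabled, as the test does
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  # create an 8 MB malloc bdev with 512-byte blocks, layer a passthru bdev on it, list, then tear down
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0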
00:04:40.219 02:21:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.219 02:21:20 -- common/autotest_common.sh@10 -- # set +x 00:04:40.219 [2024-11-21 02:21:20.693358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:40.219 [2024-11-21 02:21:20.693468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55507 ] 00:04:40.219 [2024-11-21 02:21:20.830500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.479 [2024-11-21 02:21:20.945889] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:40.479 [2024-11-21 02:21:20.946033] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:40.479 [2024-11-21 02:21:20.946048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 55507' to capture a snapshot of events at runtime. 00:04:40.479 [2024-11-21 02:21:20.946058] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid55507 for offline analysis/debug. 00:04:40.479 [2024-11-21 02:21:20.946102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.046 02:21:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.046 02:21:21 -- common/autotest_common.sh@862 -- # return 0 00:04:41.046 02:21:21 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.046 02:21:21 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.046 02:21:21 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:41.046 02:21:21 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:41.046 02:21:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.046 02:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.305 02:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.305 ************************************ 00:04:41.305 START TEST rpc_integrity 00:04:41.305 ************************************ 00:04:41.305 02:21:21 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:41.305 02:21:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.305 02:21:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.305 02:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.305 02:21:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.305 02:21:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.305 02:21:21 -- rpc/rpc.sh@13 -- # jq length 00:04:41.305 02:21:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.305 02:21:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.305 02:21:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.305 02:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.305 02:21:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.305 02:21:21 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:41.305 02:21:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.305 02:21:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.305 02:21:21 -- 
common/autotest_common.sh@10 -- # set +x 00:04:41.305 02:21:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.305 02:21:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.305 { 00:04:41.305 "aliases": [ 00:04:41.305 "c78adc3d-08b4-43b6-8a48-349829d326cd" 00:04:41.306 ], 00:04:41.306 "assigned_rate_limits": { 00:04:41.306 "r_mbytes_per_sec": 0, 00:04:41.306 "rw_ios_per_sec": 0, 00:04:41.306 "rw_mbytes_per_sec": 0, 00:04:41.306 "w_mbytes_per_sec": 0 00:04:41.306 }, 00:04:41.306 "block_size": 512, 00:04:41.306 "claimed": false, 00:04:41.306 "driver_specific": {}, 00:04:41.306 "memory_domains": [ 00:04:41.306 { 00:04:41.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.306 "dma_device_type": 2 00:04:41.306 } 00:04:41.306 ], 00:04:41.306 "name": "Malloc0", 00:04:41.306 "num_blocks": 16384, 00:04:41.306 "product_name": "Malloc disk", 00:04:41.306 "supported_io_types": { 00:04:41.306 "abort": true, 00:04:41.306 "compare": false, 00:04:41.306 "compare_and_write": false, 00:04:41.306 "flush": true, 00:04:41.306 "nvme_admin": false, 00:04:41.306 "nvme_io": false, 00:04:41.306 "read": true, 00:04:41.306 "reset": true, 00:04:41.306 "unmap": true, 00:04:41.306 "write": true, 00:04:41.306 "write_zeroes": true 00:04:41.306 }, 00:04:41.306 "uuid": "c78adc3d-08b4-43b6-8a48-349829d326cd", 00:04:41.306 "zoned": false 00:04:41.306 } 00:04:41.306 ]' 00:04:41.306 02:21:21 -- rpc/rpc.sh@17 -- # jq length 00:04:41.306 02:21:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.306 02:21:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:41.306 02:21:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.306 02:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.306 [2024-11-21 02:21:21.841960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:41.306 [2024-11-21 02:21:21.842029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.306 [2024-11-21 02:21:21.842080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11b0880 00:04:41.306 [2024-11-21 02:21:21.842097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.306 [2024-11-21 02:21:21.843443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.306 [2024-11-21 02:21:21.843478] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.306 Passthru0 00:04:41.306 02:21:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.306 02:21:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.306 02:21:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.306 02:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.306 02:21:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.306 02:21:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.306 { 00:04:41.306 "aliases": [ 00:04:41.306 "c78adc3d-08b4-43b6-8a48-349829d326cd" 00:04:41.306 ], 00:04:41.306 "assigned_rate_limits": { 00:04:41.306 "r_mbytes_per_sec": 0, 00:04:41.306 "rw_ios_per_sec": 0, 00:04:41.306 "rw_mbytes_per_sec": 0, 00:04:41.306 "w_mbytes_per_sec": 0 00:04:41.306 }, 00:04:41.306 "block_size": 512, 00:04:41.306 "claim_type": "exclusive_write", 00:04:41.306 "claimed": true, 00:04:41.306 "driver_specific": {}, 00:04:41.306 "memory_domains": [ 00:04:41.306 { 00:04:41.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.306 "dma_device_type": 2 00:04:41.306 } 00:04:41.306 ], 00:04:41.306 "name": "Malloc0", 00:04:41.306 "num_blocks": 16384, 
00:04:41.306 "product_name": "Malloc disk", 00:04:41.306 "supported_io_types": { 00:04:41.306 "abort": true, 00:04:41.306 "compare": false, 00:04:41.306 "compare_and_write": false, 00:04:41.306 "flush": true, 00:04:41.306 "nvme_admin": false, 00:04:41.306 "nvme_io": false, 00:04:41.306 "read": true, 00:04:41.306 "reset": true, 00:04:41.306 "unmap": true, 00:04:41.306 "write": true, 00:04:41.306 "write_zeroes": true 00:04:41.306 }, 00:04:41.306 "uuid": "c78adc3d-08b4-43b6-8a48-349829d326cd", 00:04:41.306 "zoned": false 00:04:41.306 }, 00:04:41.306 { 00:04:41.306 "aliases": [ 00:04:41.306 "66d78202-ca24-521c-8cbf-d2f58e82e1bb" 00:04:41.306 ], 00:04:41.306 "assigned_rate_limits": { 00:04:41.306 "r_mbytes_per_sec": 0, 00:04:41.306 "rw_ios_per_sec": 0, 00:04:41.306 "rw_mbytes_per_sec": 0, 00:04:41.306 "w_mbytes_per_sec": 0 00:04:41.306 }, 00:04:41.306 "block_size": 512, 00:04:41.306 "claimed": false, 00:04:41.306 "driver_specific": { 00:04:41.306 "passthru": { 00:04:41.306 "base_bdev_name": "Malloc0", 00:04:41.306 "name": "Passthru0" 00:04:41.306 } 00:04:41.306 }, 00:04:41.306 "memory_domains": [ 00:04:41.306 { 00:04:41.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.306 "dma_device_type": 2 00:04:41.306 } 00:04:41.306 ], 00:04:41.306 "name": "Passthru0", 00:04:41.306 "num_blocks": 16384, 00:04:41.306 "product_name": "passthru", 00:04:41.306 "supported_io_types": { 00:04:41.306 "abort": true, 00:04:41.306 "compare": false, 00:04:41.306 "compare_and_write": false, 00:04:41.306 "flush": true, 00:04:41.306 "nvme_admin": false, 00:04:41.306 "nvme_io": false, 00:04:41.306 "read": true, 00:04:41.306 "reset": true, 00:04:41.306 "unmap": true, 00:04:41.306 "write": true, 00:04:41.306 "write_zeroes": true 00:04:41.306 }, 00:04:41.306 "uuid": "66d78202-ca24-521c-8cbf-d2f58e82e1bb", 00:04:41.306 "zoned": false 00:04:41.306 } 00:04:41.306 ]' 00:04:41.306 02:21:21 -- rpc/rpc.sh@21 -- # jq length 00:04:41.306 02:21:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.306 02:21:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.306 02:21:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.306 02:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.306 02:21:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.306 02:21:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:41.306 02:21:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.306 02:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.306 02:21:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.306 02:21:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.306 02:21:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.306 02:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:41.565 02:21:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.565 02:21:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.565 02:21:21 -- rpc/rpc.sh@26 -- # jq length 00:04:41.565 02:21:22 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.565 00:04:41.565 real 0m0.301s 00:04:41.565 user 0m0.190s 00:04:41.565 sys 0m0.041s 00:04:41.565 02:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.565 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.565 ************************************ 00:04:41.565 END TEST rpc_integrity 00:04:41.565 ************************************ 00:04:41.565 02:21:22 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:41.565 02:21:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.565 
02:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.565 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.565 ************************************ 00:04:41.565 START TEST rpc_plugins 00:04:41.565 ************************************ 00:04:41.565 02:21:22 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:41.565 02:21:22 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:41.565 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.565 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.565 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.565 02:21:22 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:41.565 02:21:22 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:41.565 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.565 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.565 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.565 02:21:22 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:41.565 { 00:04:41.565 "aliases": [ 00:04:41.565 "1b882a46-7945-458a-a23e-00ec4329e723" 00:04:41.565 ], 00:04:41.565 "assigned_rate_limits": { 00:04:41.565 "r_mbytes_per_sec": 0, 00:04:41.565 "rw_ios_per_sec": 0, 00:04:41.565 "rw_mbytes_per_sec": 0, 00:04:41.565 "w_mbytes_per_sec": 0 00:04:41.565 }, 00:04:41.565 "block_size": 4096, 00:04:41.565 "claimed": false, 00:04:41.565 "driver_specific": {}, 00:04:41.565 "memory_domains": [ 00:04:41.565 { 00:04:41.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.565 "dma_device_type": 2 00:04:41.565 } 00:04:41.565 ], 00:04:41.565 "name": "Malloc1", 00:04:41.565 "num_blocks": 256, 00:04:41.565 "product_name": "Malloc disk", 00:04:41.565 "supported_io_types": { 00:04:41.565 "abort": true, 00:04:41.565 "compare": false, 00:04:41.565 "compare_and_write": false, 00:04:41.565 "flush": true, 00:04:41.565 "nvme_admin": false, 00:04:41.565 "nvme_io": false, 00:04:41.565 "read": true, 00:04:41.565 "reset": true, 00:04:41.565 "unmap": true, 00:04:41.565 "write": true, 00:04:41.565 "write_zeroes": true 00:04:41.565 }, 00:04:41.565 "uuid": "1b882a46-7945-458a-a23e-00ec4329e723", 00:04:41.565 "zoned": false 00:04:41.565 } 00:04:41.565 ]' 00:04:41.565 02:21:22 -- rpc/rpc.sh@32 -- # jq length 00:04:41.565 02:21:22 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:41.565 02:21:22 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:41.565 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.565 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.565 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.565 02:21:22 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:41.565 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.565 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.565 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.565 02:21:22 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:41.565 02:21:22 -- rpc/rpc.sh@36 -- # jq length 00:04:41.824 02:21:22 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:41.824 00:04:41.824 real 0m0.159s 00:04:41.824 user 0m0.108s 00:04:41.824 sys 0m0.014s 00:04:41.824 02:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.824 ************************************ 00:04:41.824 END TEST rpc_plugins 00:04:41.824 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.824 ************************************ 00:04:41.824 02:21:22 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
00:04:41.824 02:21:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.824 02:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.824 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.824 ************************************ 00:04:41.824 START TEST rpc_trace_cmd_test 00:04:41.824 ************************************ 00:04:41.824 02:21:22 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:41.824 02:21:22 -- rpc/rpc.sh@40 -- # local info 00:04:41.824 02:21:22 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:41.824 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.824 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:41.824 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.824 02:21:22 -- rpc/rpc.sh@42 -- # info='{ 00:04:41.824 "bdev": { 00:04:41.824 "mask": "0x8", 00:04:41.824 "tpoint_mask": "0xffffffffffffffff" 00:04:41.824 }, 00:04:41.824 "bdev_nvme": { 00:04:41.824 "mask": "0x4000", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "blobfs": { 00:04:41.824 "mask": "0x80", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "dsa": { 00:04:41.824 "mask": "0x200", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "ftl": { 00:04:41.824 "mask": "0x40", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "iaa": { 00:04:41.824 "mask": "0x1000", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "iscsi_conn": { 00:04:41.824 "mask": "0x2", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "nvme_pcie": { 00:04:41.824 "mask": "0x800", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "nvme_tcp": { 00:04:41.824 "mask": "0x2000", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "nvmf_rdma": { 00:04:41.824 "mask": "0x10", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "nvmf_tcp": { 00:04:41.824 "mask": "0x20", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "scsi": { 00:04:41.824 "mask": "0x4", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "thread": { 00:04:41.824 "mask": "0x400", 00:04:41.824 "tpoint_mask": "0x0" 00:04:41.824 }, 00:04:41.824 "tpoint_group_mask": "0x8", 00:04:41.824 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid55507" 00:04:41.824 }' 00:04:41.824 02:21:22 -- rpc/rpc.sh@43 -- # jq length 00:04:41.824 02:21:22 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:41.824 02:21:22 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.824 02:21:22 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.824 02:21:22 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.824 02:21:22 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.824 02:21:22 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:42.083 02:21:22 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:42.083 02:21:22 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:42.083 02:21:22 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:42.083 00:04:42.083 real 0m0.271s 00:04:42.083 user 0m0.230s 00:04:42.083 sys 0m0.030s 00:04:42.083 02:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.083 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.083 ************************************ 00:04:42.083 END TEST rpc_trace_cmd_test 00:04:42.083 ************************************ 00:04:42.083 02:21:22 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:42.083 02:21:22 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:42.083 02:21:22 -- common/autotest_common.sh@1087 -- # 
'[' 2 -le 1 ']' 00:04:42.083 02:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.083 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.083 ************************************ 00:04:42.083 START TEST go_rpc 00:04:42.083 ************************************ 00:04:42.083 02:21:22 -- common/autotest_common.sh@1114 -- # go_rpc 00:04:42.083 02:21:22 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:42.083 02:21:22 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:42.083 02:21:22 -- rpc/rpc.sh@52 -- # jq length 00:04:42.083 02:21:22 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:42.083 02:21:22 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:42.083 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.083 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.083 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.083 02:21:22 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:42.083 02:21:22 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:42.083 02:21:22 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["e9db7942-9d71-4eff-b664-24cb13d01b92"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"e9db7942-9d71-4eff-b664-24cb13d01b92","zoned":false}]' 00:04:42.083 02:21:22 -- rpc/rpc.sh@57 -- # jq length 00:04:42.342 02:21:22 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:42.342 02:21:22 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:42.342 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.342 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.342 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.342 02:21:22 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:42.342 02:21:22 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:42.342 02:21:22 -- rpc/rpc.sh@61 -- # jq length 00:04:42.342 02:21:22 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:42.342 00:04:42.342 real 0m0.227s 00:04:42.342 user 0m0.156s 00:04:42.342 sys 0m0.036s 00:04:42.342 02:21:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.342 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.342 ************************************ 00:04:42.342 END TEST go_rpc 00:04:42.342 ************************************ 00:04:42.342 02:21:22 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:42.342 02:21:22 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:42.342 02:21:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.342 02:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.342 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.342 ************************************ 00:04:42.342 START TEST rpc_daemon_integrity 00:04:42.342 ************************************ 00:04:42.342 02:21:22 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:42.342 02:21:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:42.342 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.342 02:21:22 -- 
common/autotest_common.sh@10 -- # set +x 00:04:42.342 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.342 02:21:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:42.342 02:21:22 -- rpc/rpc.sh@13 -- # jq length 00:04:42.342 02:21:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:42.342 02:21:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:42.342 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.342 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.342 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.342 02:21:22 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:42.342 02:21:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:42.342 02:21:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.342 02:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.342 02:21:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.342 02:21:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:42.342 { 00:04:42.342 "aliases": [ 00:04:42.342 "90faad62-a773-41b4-91d2-73a98d3a1528" 00:04:42.342 ], 00:04:42.342 "assigned_rate_limits": { 00:04:42.342 "r_mbytes_per_sec": 0, 00:04:42.342 "rw_ios_per_sec": 0, 00:04:42.342 "rw_mbytes_per_sec": 0, 00:04:42.342 "w_mbytes_per_sec": 0 00:04:42.342 }, 00:04:42.342 "block_size": 512, 00:04:42.342 "claimed": false, 00:04:42.342 "driver_specific": {}, 00:04:42.342 "memory_domains": [ 00:04:42.342 { 00:04:42.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.342 "dma_device_type": 2 00:04:42.342 } 00:04:42.342 ], 00:04:42.342 "name": "Malloc3", 00:04:42.342 "num_blocks": 16384, 00:04:42.342 "product_name": "Malloc disk", 00:04:42.342 "supported_io_types": { 00:04:42.342 "abort": true, 00:04:42.342 "compare": false, 00:04:42.342 "compare_and_write": false, 00:04:42.342 "flush": true, 00:04:42.342 "nvme_admin": false, 00:04:42.342 "nvme_io": false, 00:04:42.342 "read": true, 00:04:42.342 "reset": true, 00:04:42.342 "unmap": true, 00:04:42.342 "write": true, 00:04:42.342 "write_zeroes": true 00:04:42.342 }, 00:04:42.342 "uuid": "90faad62-a773-41b4-91d2-73a98d3a1528", 00:04:42.342 "zoned": false 00:04:42.342 } 00:04:42.342 ]' 00:04:42.342 02:21:22 -- rpc/rpc.sh@17 -- # jq length 00:04:42.602 02:21:23 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:42.602 02:21:23 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:42.602 02:21:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.602 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:42.602 [2024-11-21 02:21:23.022390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:42.602 [2024-11-21 02:21:23.022429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:42.602 [2024-11-21 02:21:23.022446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13a1680 00:04:42.602 [2024-11-21 02:21:23.022455] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:42.602 [2024-11-21 02:21:23.023631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:42.602 [2024-11-21 02:21:23.023659] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:42.602 Passthru0 00:04:42.602 02:21:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.602 02:21:23 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:42.602 02:21:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.602 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:42.602 
02:21:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.602 02:21:23 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:42.602 { 00:04:42.602 "aliases": [ 00:04:42.602 "90faad62-a773-41b4-91d2-73a98d3a1528" 00:04:42.602 ], 00:04:42.602 "assigned_rate_limits": { 00:04:42.602 "r_mbytes_per_sec": 0, 00:04:42.602 "rw_ios_per_sec": 0, 00:04:42.602 "rw_mbytes_per_sec": 0, 00:04:42.602 "w_mbytes_per_sec": 0 00:04:42.602 }, 00:04:42.602 "block_size": 512, 00:04:42.602 "claim_type": "exclusive_write", 00:04:42.602 "claimed": true, 00:04:42.602 "driver_specific": {}, 00:04:42.602 "memory_domains": [ 00:04:42.602 { 00:04:42.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.602 "dma_device_type": 2 00:04:42.602 } 00:04:42.602 ], 00:04:42.602 "name": "Malloc3", 00:04:42.602 "num_blocks": 16384, 00:04:42.602 "product_name": "Malloc disk", 00:04:42.602 "supported_io_types": { 00:04:42.602 "abort": true, 00:04:42.602 "compare": false, 00:04:42.602 "compare_and_write": false, 00:04:42.602 "flush": true, 00:04:42.602 "nvme_admin": false, 00:04:42.602 "nvme_io": false, 00:04:42.602 "read": true, 00:04:42.602 "reset": true, 00:04:42.602 "unmap": true, 00:04:42.602 "write": true, 00:04:42.602 "write_zeroes": true 00:04:42.602 }, 00:04:42.602 "uuid": "90faad62-a773-41b4-91d2-73a98d3a1528", 00:04:42.602 "zoned": false 00:04:42.602 }, 00:04:42.602 { 00:04:42.602 "aliases": [ 00:04:42.602 "1db1bbfc-8e86-5b6d-9a49-2f3b276fb242" 00:04:42.602 ], 00:04:42.602 "assigned_rate_limits": { 00:04:42.602 "r_mbytes_per_sec": 0, 00:04:42.602 "rw_ios_per_sec": 0, 00:04:42.602 "rw_mbytes_per_sec": 0, 00:04:42.602 "w_mbytes_per_sec": 0 00:04:42.602 }, 00:04:42.602 "block_size": 512, 00:04:42.602 "claimed": false, 00:04:42.602 "driver_specific": { 00:04:42.602 "passthru": { 00:04:42.602 "base_bdev_name": "Malloc3", 00:04:42.602 "name": "Passthru0" 00:04:42.602 } 00:04:42.602 }, 00:04:42.602 "memory_domains": [ 00:04:42.602 { 00:04:42.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.602 "dma_device_type": 2 00:04:42.602 } 00:04:42.602 ], 00:04:42.602 "name": "Passthru0", 00:04:42.602 "num_blocks": 16384, 00:04:42.602 "product_name": "passthru", 00:04:42.602 "supported_io_types": { 00:04:42.602 "abort": true, 00:04:42.602 "compare": false, 00:04:42.602 "compare_and_write": false, 00:04:42.602 "flush": true, 00:04:42.602 "nvme_admin": false, 00:04:42.602 "nvme_io": false, 00:04:42.602 "read": true, 00:04:42.602 "reset": true, 00:04:42.602 "unmap": true, 00:04:42.602 "write": true, 00:04:42.602 "write_zeroes": true 00:04:42.602 }, 00:04:42.602 "uuid": "1db1bbfc-8e86-5b6d-9a49-2f3b276fb242", 00:04:42.602 "zoned": false 00:04:42.602 } 00:04:42.602 ]' 00:04:42.602 02:21:23 -- rpc/rpc.sh@21 -- # jq length 00:04:42.602 02:21:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:42.602 02:21:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:42.602 02:21:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.602 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:42.602 02:21:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.602 02:21:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:42.602 02:21:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.602 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:42.602 02:21:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.602 02:21:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:42.602 02:21:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.602 02:21:23 -- 
common/autotest_common.sh@10 -- # set +x 00:04:42.602 02:21:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.602 02:21:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:42.602 02:21:23 -- rpc/rpc.sh@26 -- # jq length 00:04:42.602 02:21:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:42.602 00:04:42.602 real 0m0.317s 00:04:42.602 user 0m0.208s 00:04:42.602 sys 0m0.035s 00:04:42.602 02:21:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:42.602 ************************************ 00:04:42.602 END TEST rpc_daemon_integrity 00:04:42.602 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:42.602 ************************************ 00:04:42.602 02:21:23 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:42.602 02:21:23 -- rpc/rpc.sh@84 -- # killprocess 55507 00:04:42.602 02:21:23 -- common/autotest_common.sh@936 -- # '[' -z 55507 ']' 00:04:42.602 02:21:23 -- common/autotest_common.sh@940 -- # kill -0 55507 00:04:42.602 02:21:23 -- common/autotest_common.sh@941 -- # uname 00:04:42.602 02:21:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:42.602 02:21:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55507 00:04:42.863 02:21:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:42.863 02:21:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:42.863 killing process with pid 55507 00:04:42.863 02:21:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55507' 00:04:42.863 02:21:23 -- common/autotest_common.sh@955 -- # kill 55507 00:04:42.863 02:21:23 -- common/autotest_common.sh@960 -- # wait 55507 00:04:43.431 00:04:43.431 real 0m3.371s 00:04:43.431 user 0m4.260s 00:04:43.431 sys 0m0.875s 00:04:43.431 02:21:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.431 ************************************ 00:04:43.431 END TEST rpc 00:04:43.431 ************************************ 00:04:43.431 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:43.431 02:21:23 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:43.431 02:21:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.431 02:21:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.431 02:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:43.431 ************************************ 00:04:43.431 START TEST rpc_client 00:04:43.431 ************************************ 00:04:43.431 02:21:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:43.431 * Looking for test storage... 
00:04:43.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:43.431 02:21:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.431 02:21:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.431 02:21:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.431 02:21:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.431 02:21:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.431 02:21:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.431 02:21:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.431 02:21:24 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.431 02:21:24 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.431 02:21:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.431 02:21:24 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.431 02:21:24 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.431 02:21:24 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.431 02:21:24 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.431 02:21:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.431 02:21:24 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.431 02:21:24 -- scripts/common.sh@344 -- # : 1 00:04:43.431 02:21:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.431 02:21:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.431 02:21:24 -- scripts/common.sh@364 -- # decimal 1 00:04:43.431 02:21:24 -- scripts/common.sh@352 -- # local d=1 00:04:43.431 02:21:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.431 02:21:24 -- scripts/common.sh@354 -- # echo 1 00:04:43.431 02:21:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.431 02:21:24 -- scripts/common.sh@365 -- # decimal 2 00:04:43.431 02:21:24 -- scripts/common.sh@352 -- # local d=2 00:04:43.431 02:21:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.431 02:21:24 -- scripts/common.sh@354 -- # echo 2 00:04:43.431 02:21:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.431 02:21:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.431 02:21:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.431 02:21:24 -- scripts/common.sh@367 -- # return 0 00:04:43.431 02:21:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.431 02:21:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.431 --rc genhtml_branch_coverage=1 00:04:43.431 --rc genhtml_function_coverage=1 00:04:43.431 --rc genhtml_legend=1 00:04:43.431 --rc geninfo_all_blocks=1 00:04:43.431 --rc geninfo_unexecuted_blocks=1 00:04:43.431 00:04:43.431 ' 00:04:43.431 02:21:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.431 --rc genhtml_branch_coverage=1 00:04:43.431 --rc genhtml_function_coverage=1 00:04:43.431 --rc genhtml_legend=1 00:04:43.431 --rc geninfo_all_blocks=1 00:04:43.431 --rc geninfo_unexecuted_blocks=1 00:04:43.431 00:04:43.431 ' 00:04:43.431 02:21:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.431 --rc genhtml_branch_coverage=1 00:04:43.431 --rc genhtml_function_coverage=1 00:04:43.431 --rc genhtml_legend=1 00:04:43.431 --rc geninfo_all_blocks=1 00:04:43.431 --rc geninfo_unexecuted_blocks=1 00:04:43.431 00:04:43.431 ' 00:04:43.431 
02:21:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.431 --rc genhtml_branch_coverage=1 00:04:43.431 --rc genhtml_function_coverage=1 00:04:43.431 --rc genhtml_legend=1 00:04:43.431 --rc geninfo_all_blocks=1 00:04:43.431 --rc geninfo_unexecuted_blocks=1 00:04:43.431 00:04:43.431 ' 00:04:43.431 02:21:24 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:43.431 OK 00:04:43.431 02:21:24 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.431 00:04:43.431 real 0m0.206s 00:04:43.431 user 0m0.123s 00:04:43.431 sys 0m0.093s 00:04:43.431 02:21:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.431 02:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.431 ************************************ 00:04:43.431 END TEST rpc_client 00:04:43.431 ************************************ 00:04:43.690 02:21:24 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.690 02:21:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.690 02:21:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.690 02:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.690 ************************************ 00:04:43.690 START TEST json_config 00:04:43.690 ************************************ 00:04:43.690 02:21:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.691 02:21:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.691 02:21:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.691 02:21:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.691 02:21:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.691 02:21:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.691 02:21:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.691 02:21:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.691 02:21:24 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.691 02:21:24 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.691 02:21:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.691 02:21:24 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.691 02:21:24 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.691 02:21:24 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.691 02:21:24 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.691 02:21:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.691 02:21:24 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.691 02:21:24 -- scripts/common.sh@344 -- # : 1 00:04:43.691 02:21:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.691 02:21:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.691 02:21:24 -- scripts/common.sh@364 -- # decimal 1 00:04:43.691 02:21:24 -- scripts/common.sh@352 -- # local d=1 00:04:43.691 02:21:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.691 02:21:24 -- scripts/common.sh@354 -- # echo 1 00:04:43.691 02:21:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.691 02:21:24 -- scripts/common.sh@365 -- # decimal 2 00:04:43.691 02:21:24 -- scripts/common.sh@352 -- # local d=2 00:04:43.691 02:21:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.691 02:21:24 -- scripts/common.sh@354 -- # echo 2 00:04:43.691 02:21:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.691 02:21:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.691 02:21:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.691 02:21:24 -- scripts/common.sh@367 -- # return 0 00:04:43.691 02:21:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.691 02:21:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.691 --rc genhtml_branch_coverage=1 00:04:43.691 --rc genhtml_function_coverage=1 00:04:43.691 --rc genhtml_legend=1 00:04:43.691 --rc geninfo_all_blocks=1 00:04:43.691 --rc geninfo_unexecuted_blocks=1 00:04:43.691 00:04:43.691 ' 00:04:43.691 02:21:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.691 --rc genhtml_branch_coverage=1 00:04:43.691 --rc genhtml_function_coverage=1 00:04:43.691 --rc genhtml_legend=1 00:04:43.691 --rc geninfo_all_blocks=1 00:04:43.691 --rc geninfo_unexecuted_blocks=1 00:04:43.691 00:04:43.691 ' 00:04:43.691 02:21:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.691 --rc genhtml_branch_coverage=1 00:04:43.691 --rc genhtml_function_coverage=1 00:04:43.691 --rc genhtml_legend=1 00:04:43.691 --rc geninfo_all_blocks=1 00:04:43.691 --rc geninfo_unexecuted_blocks=1 00:04:43.691 00:04:43.691 ' 00:04:43.691 02:21:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.691 --rc genhtml_branch_coverage=1 00:04:43.691 --rc genhtml_function_coverage=1 00:04:43.691 --rc genhtml_legend=1 00:04:43.691 --rc geninfo_all_blocks=1 00:04:43.691 --rc geninfo_unexecuted_blocks=1 00:04:43.691 00:04:43.691 ' 00:04:43.691 02:21:24 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.691 02:21:24 -- nvmf/common.sh@7 -- # uname -s 00:04:43.691 02:21:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.691 02:21:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.691 02:21:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.691 02:21:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.691 02:21:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.691 02:21:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.691 02:21:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.691 02:21:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.691 02:21:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.691 02:21:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.691 02:21:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
00:04:43.691 02:21:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:04:43.691 02:21:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.691 02:21:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.691 02:21:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.691 02:21:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.691 02:21:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.691 02:21:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.691 02:21:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.691 02:21:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.691 02:21:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.691 02:21:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.691 02:21:24 -- paths/export.sh@5 -- # export PATH 00:04:43.691 02:21:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.691 02:21:24 -- nvmf/common.sh@46 -- # : 0 00:04:43.691 02:21:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:43.691 02:21:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:43.691 02:21:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:43.691 02:21:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.691 02:21:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.691 02:21:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:43.691 02:21:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:43.691 02:21:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:43.691 02:21:24 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:43.691 02:21:24 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:43.691 02:21:24 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:43.691 02:21:24 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.691 02:21:24 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:43.691 02:21:24 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:43.691 INFO: JSON configuration test init 00:04:43.691 02:21:24 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:43.691 02:21:24 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:43.691 02:21:24 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:43.691 02:21:24 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:43.691 02:21:24 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:43.691 02:21:24 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:43.691 02:21:24 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:43.691 02:21:24 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.691 02:21:24 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:43.691 02:21:24 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:43.691 02:21:24 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:43.691 02:21:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.691 02:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.691 02:21:24 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:43.691 02:21:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.691 02:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.691 Waiting for target to run... 00:04:43.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.691 02:21:24 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:43.691 02:21:24 -- json_config/json_config.sh@98 -- # local app=target 00:04:43.691 02:21:24 -- json_config/json_config.sh@99 -- # shift 00:04:43.691 02:21:24 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:43.691 02:21:24 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:43.691 02:21:24 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:43.691 02:21:24 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:43.691 02:21:24 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:43.691 02:21:24 -- json_config/json_config.sh@111 -- # app_pid[$app]=55834 00:04:43.691 02:21:24 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:43.691 02:21:24 -- json_config/json_config.sh@114 -- # waitforlisten 55834 /var/tmp/spdk_tgt.sock 00:04:43.691 02:21:24 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:43.691 02:21:24 -- common/autotest_common.sh@829 -- # '[' -z 55834 ']' 00:04:43.691 02:21:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.692 02:21:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.692 02:21:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:43.692 02:21:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.692 02:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.951 [2024-11-21 02:21:24.342939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:43.951 [2024-11-21 02:21:24.343028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55834 ] 00:04:44.210 [2024-11-21 02:21:24.732686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.210 [2024-11-21 02:21:24.833489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.210 [2024-11-21 02:21:24.833662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.777 00:04:44.777 02:21:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.777 02:21:25 -- common/autotest_common.sh@862 -- # return 0 00:04:44.777 02:21:25 -- json_config/json_config.sh@115 -- # echo '' 00:04:44.777 02:21:25 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:44.777 02:21:25 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:44.777 02:21:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.777 02:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:44.777 02:21:25 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:44.777 02:21:25 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:44.777 02:21:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.777 02:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:44.777 02:21:25 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:44.777 02:21:25 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:44.777 02:21:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:45.344 02:21:25 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:45.344 02:21:25 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:45.344 02:21:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.344 02:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:45.344 02:21:25 -- json_config/json_config.sh@48 -- # local ret=0 00:04:45.344 02:21:25 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:45.344 02:21:25 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:45.344 02:21:25 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:45.344 02:21:25 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:45.344 02:21:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:45.603 02:21:26 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:45.603 02:21:26 -- json_config/json_config.sh@51 -- # local get_types 00:04:45.603 02:21:26 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:45.603 02:21:26 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:45.603 02:21:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.603 02:21:26 -- 
common/autotest_common.sh@10 -- # set +x 00:04:45.603 02:21:26 -- json_config/json_config.sh@58 -- # return 0 00:04:45.603 02:21:26 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:45.603 02:21:26 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:45.603 02:21:26 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:45.603 02:21:26 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:45.603 02:21:26 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:45.603 02:21:26 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:45.603 02:21:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.603 02:21:26 -- common/autotest_common.sh@10 -- # set +x 00:04:45.603 02:21:26 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:45.603 02:21:26 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:45.603 02:21:26 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:45.603 02:21:26 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:45.603 02:21:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:45.862 MallocForNvmf0 00:04:45.862 02:21:26 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:45.862 02:21:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:46.120 MallocForNvmf1 00:04:46.120 02:21:26 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.120 02:21:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.378 [2024-11-21 02:21:26.946503] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.378 02:21:26 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.378 02:21:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.637 02:21:27 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:46.637 02:21:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:46.908 02:21:27 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:46.908 02:21:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.176 02:21:27 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.176 02:21:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.435 [2024-11-21 02:21:27.863025] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.435 
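The create_nvmf_subsystem_config step above is simply the following tgt_rpc calls issued in order (all visible in the trace). A sketch of reproducing the same target state outside the harness, using the json_config socket shown in the log:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420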
02:21:27 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:47.435 02:21:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.435 02:21:27 -- common/autotest_common.sh@10 -- # set +x 00:04:47.435 02:21:27 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:47.435 02:21:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.435 02:21:27 -- common/autotest_common.sh@10 -- # set +x 00:04:47.435 02:21:27 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:47.435 02:21:27 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.435 02:21:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.694 MallocBdevForConfigChangeCheck 00:04:47.694 02:21:28 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:47.694 02:21:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:47.694 02:21:28 -- common/autotest_common.sh@10 -- # set +x 00:04:47.694 02:21:28 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:47.694 02:21:28 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.952 INFO: shutting down applications... 00:04:47.952 02:21:28 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:47.952 02:21:28 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:47.952 02:21:28 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:47.952 02:21:28 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:47.952 02:21:28 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:48.211 Calling clear_iscsi_subsystem 00:04:48.211 Calling clear_nvmf_subsystem 00:04:48.211 Calling clear_nbd_subsystem 00:04:48.211 Calling clear_ublk_subsystem 00:04:48.211 Calling clear_vhost_blk_subsystem 00:04:48.211 Calling clear_vhost_scsi_subsystem 00:04:48.211 Calling clear_scheduler_subsystem 00:04:48.211 Calling clear_bdev_subsystem 00:04:48.211 Calling clear_accel_subsystem 00:04:48.211 Calling clear_vmd_subsystem 00:04:48.211 Calling clear_sock_subsystem 00:04:48.211 Calling clear_iobuf_subsystem 00:04:48.469 02:21:28 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:48.469 02:21:28 -- json_config/json_config.sh@396 -- # count=100 00:04:48.469 02:21:28 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:48.470 02:21:28 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.470 02:21:28 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:48.470 02:21:28 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:48.728 02:21:29 -- json_config/json_config.sh@398 -- # break 00:04:48.728 02:21:29 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:48.728 02:21:29 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:48.728 02:21:29 -- json_config/json_config.sh@120 -- # local app=target 00:04:48.728 02:21:29 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:48.728 02:21:29 -- json_config/json_config.sh@124 -- # [[ -n 55834 ]] 00:04:48.728 02:21:29 -- json_config/json_config.sh@127 -- # kill -SIGINT 55834 00:04:48.728 02:21:29 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:48.728 02:21:29 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:48.728 02:21:29 -- json_config/json_config.sh@130 -- # kill -0 55834 00:04:48.728 02:21:29 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:49.296 02:21:29 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:49.296 02:21:29 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:49.296 02:21:29 -- json_config/json_config.sh@130 -- # kill -0 55834 00:04:49.296 02:21:29 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:49.296 02:21:29 -- json_config/json_config.sh@132 -- # break 00:04:49.296 02:21:29 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:49.296 SPDK target shutdown done 00:04:49.296 02:21:29 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:49.296 INFO: relaunching applications... 00:04:49.296 02:21:29 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:49.296 02:21:29 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.296 02:21:29 -- json_config/json_config.sh@98 -- # local app=target 00:04:49.296 02:21:29 -- json_config/json_config.sh@99 -- # shift 00:04:49.296 02:21:29 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:49.296 02:21:29 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:49.296 02:21:29 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:49.296 02:21:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:49.296 02:21:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:49.296 02:21:29 -- json_config/json_config.sh@111 -- # app_pid[$app]=56103 00:04:49.296 02:21:29 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.296 Waiting for target to run... 00:04:49.296 02:21:29 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:49.296 02:21:29 -- json_config/json_config.sh@114 -- # waitforlisten 56103 /var/tmp/spdk_tgt.sock 00:04:49.296 02:21:29 -- common/autotest_common.sh@829 -- # '[' -z 56103 ']' 00:04:49.296 02:21:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.296 02:21:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.296 02:21:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.296 02:21:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.296 02:21:29 -- common/autotest_common.sh@10 -- # set +x 00:04:49.296 [2024-11-21 02:21:29.834983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:49.296 [2024-11-21 02:21:29.835111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56103 ] 00:04:49.865 [2024-11-21 02:21:30.260694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.865 [2024-11-21 02:21:30.342166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.865 [2024-11-21 02:21:30.342362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.123 [2024-11-21 02:21:30.644384] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:50.123 [2024-11-21 02:21:30.676463] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:50.123 02:21:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.123 02:21:30 -- common/autotest_common.sh@862 -- # return 0 00:04:50.123 00:04:50.123 02:21:30 -- json_config/json_config.sh@115 -- # echo '' 00:04:50.123 02:21:30 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:50.123 INFO: Checking if target configuration is the same... 00:04:50.123 02:21:30 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:50.123 02:21:30 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.123 02:21:30 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:50.123 02:21:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.123 + '[' 2 -ne 2 ']' 00:04:50.124 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:50.124 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:50.124 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:50.124 +++ basename /dev/fd/62 00:04:50.124 ++ mktemp /tmp/62.XXX 00:04:50.124 + tmp_file_1=/tmp/62.D5g 00:04:50.124 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.124 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.124 + tmp_file_2=/tmp/spdk_tgt_config.json.lro 00:04:50.124 + ret=0 00:04:50.124 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.691 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.691 + diff -u /tmp/62.D5g /tmp/spdk_tgt_config.json.lro 00:04:50.691 INFO: JSON config files are the same 00:04:50.691 + echo 'INFO: JSON config files are the same' 00:04:50.691 + rm /tmp/62.D5g /tmp/spdk_tgt_config.json.lro 00:04:50.691 + exit 0 00:04:50.691 02:21:31 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:50.691 INFO: changing configuration and checking if this can be detected... 00:04:50.691 02:21:31 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
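The "configuration is the same" check that just passed amounts to dumping the live config, sorting both sides with config_filter.py and diffing. A sketch of that comparison under the same paths seen in the trace (the temp file names below are placeholders for illustration; json_diff.sh generates its own with mktemp):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_config.sorted
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/expected.sorted
  diff -u /tmp/expected.sorted /tmp/live_config.sorted && echo 'INFO: JSON config files are the same'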
00:04:50.691 02:21:31 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.691 02:21:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.951 02:21:31 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.951 02:21:31 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:50.951 02:21:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.951 + '[' 2 -ne 2 ']' 00:04:50.951 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:50.951 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:50.951 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:50.951 +++ basename /dev/fd/62 00:04:50.951 ++ mktemp /tmp/62.XXX 00:04:50.951 + tmp_file_1=/tmp/62.LIC 00:04:50.951 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.951 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.951 + tmp_file_2=/tmp/spdk_tgt_config.json.Km2 00:04:50.951 + ret=0 00:04:50.951 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:51.210 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:51.469 + diff -u /tmp/62.LIC /tmp/spdk_tgt_config.json.Km2 00:04:51.469 + ret=1 00:04:51.469 + echo '=== Start of file: /tmp/62.LIC ===' 00:04:51.469 + cat /tmp/62.LIC 00:04:51.469 + echo '=== End of file: /tmp/62.LIC ===' 00:04:51.469 + echo '' 00:04:51.469 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Km2 ===' 00:04:51.469 + cat /tmp/spdk_tgt_config.json.Km2 00:04:51.469 + echo '=== End of file: /tmp/spdk_tgt_config.json.Km2 ===' 00:04:51.469 + echo '' 00:04:51.469 + rm /tmp/62.LIC /tmp/spdk_tgt_config.json.Km2 00:04:51.469 + exit 1 00:04:51.469 INFO: configuration change detected. 00:04:51.469 02:21:31 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
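The next step, traced below, proves the comparison can also fail: it deletes the MallocBdevForConfigChangeCheck canary bdev over RPC and re-runs the same normalized diff, this time expecting a non-zero exit. A condensed sketch of that sequence (paths taken from the trace):

    # Introduce a deliberate config change, then expect the diff to report it.
    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck
    if ! "$rootdir/test/json_config/json_diff.sh" \
            <("$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config) \
            "$rootdir/spdk_tgt_config.json"; then
        echo 'INFO: configuration change detected.'
    fi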
00:04:51.469 02:21:31 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:51.469 02:21:31 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:51.469 02:21:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.469 02:21:31 -- common/autotest_common.sh@10 -- # set +x 00:04:51.469 02:21:31 -- json_config/json_config.sh@360 -- # local ret=0 00:04:51.469 02:21:31 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:51.469 02:21:31 -- json_config/json_config.sh@370 -- # [[ -n 56103 ]] 00:04:51.469 02:21:31 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:51.469 02:21:31 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:51.469 02:21:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:51.469 02:21:31 -- common/autotest_common.sh@10 -- # set +x 00:04:51.469 02:21:31 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:51.469 02:21:31 -- json_config/json_config.sh@246 -- # uname -s 00:04:51.469 02:21:31 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:51.469 02:21:31 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:51.469 02:21:31 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:51.469 02:21:31 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:51.469 02:21:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.469 02:21:31 -- common/autotest_common.sh@10 -- # set +x 00:04:51.469 02:21:31 -- json_config/json_config.sh@376 -- # killprocess 56103 00:04:51.469 02:21:31 -- common/autotest_common.sh@936 -- # '[' -z 56103 ']' 00:04:51.469 02:21:31 -- common/autotest_common.sh@940 -- # kill -0 56103 00:04:51.469 02:21:31 -- common/autotest_common.sh@941 -- # uname 00:04:51.469 02:21:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:51.469 02:21:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56103 00:04:51.469 02:21:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:51.469 02:21:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:51.469 killing process with pid 56103 00:04:51.469 02:21:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56103' 00:04:51.469 02:21:32 -- common/autotest_common.sh@955 -- # kill 56103 00:04:51.469 02:21:32 -- common/autotest_common.sh@960 -- # wait 56103 00:04:51.728 02:21:32 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:51.728 02:21:32 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:51.728 02:21:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.728 02:21:32 -- common/autotest_common.sh@10 -- # set +x 00:04:51.728 02:21:32 -- json_config/json_config.sh@381 -- # return 0 00:04:51.728 INFO: Success 00:04:51.728 02:21:32 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:51.728 00:04:51.728 real 0m8.201s 00:04:51.728 user 0m11.491s 00:04:51.728 sys 0m1.882s 00:04:51.728 02:21:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.728 02:21:32 -- common/autotest_common.sh@10 -- # set +x 00:04:51.728 ************************************ 00:04:51.728 END TEST json_config 00:04:51.728 ************************************ 00:04:51.728 02:21:32 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.728 
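Each suite in this log is handed to autotest's run_test helper, which prints the START/END banners and the real/user/sys timing summary seen around every test. A deliberately simplified stand-in for that wrapper, not the actual implementation in autotest_common.sh:

    # Simplified run_test: banner, timed execution of the test script, banner.
    run_test() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }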
02:21:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.728 02:21:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.728 02:21:32 -- common/autotest_common.sh@10 -- # set +x 00:04:51.728 ************************************ 00:04:51.728 START TEST json_config_extra_key 00:04:51.728 ************************************ 00:04:51.728 02:21:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.987 02:21:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:51.988 02:21:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:51.988 02:21:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:51.988 02:21:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:51.988 02:21:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:51.988 02:21:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:51.988 02:21:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:51.988 02:21:32 -- scripts/common.sh@335 -- # IFS=.-: 00:04:51.988 02:21:32 -- scripts/common.sh@335 -- # read -ra ver1 00:04:51.988 02:21:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.988 02:21:32 -- scripts/common.sh@336 -- # read -ra ver2 00:04:51.988 02:21:32 -- scripts/common.sh@337 -- # local 'op=<' 00:04:51.988 02:21:32 -- scripts/common.sh@339 -- # ver1_l=2 00:04:51.988 02:21:32 -- scripts/common.sh@340 -- # ver2_l=1 00:04:51.988 02:21:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:51.988 02:21:32 -- scripts/common.sh@343 -- # case "$op" in 00:04:51.988 02:21:32 -- scripts/common.sh@344 -- # : 1 00:04:51.988 02:21:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:51.988 02:21:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.988 02:21:32 -- scripts/common.sh@364 -- # decimal 1 00:04:51.988 02:21:32 -- scripts/common.sh@352 -- # local d=1 00:04:51.988 02:21:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.988 02:21:32 -- scripts/common.sh@354 -- # echo 1 00:04:51.988 02:21:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:51.988 02:21:32 -- scripts/common.sh@365 -- # decimal 2 00:04:51.988 02:21:32 -- scripts/common.sh@352 -- # local d=2 00:04:51.988 02:21:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.988 02:21:32 -- scripts/common.sh@354 -- # echo 2 00:04:51.988 02:21:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:51.988 02:21:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:51.988 02:21:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:51.988 02:21:32 -- scripts/common.sh@367 -- # return 0 00:04:51.988 02:21:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.988 02:21:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:51.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.988 --rc genhtml_branch_coverage=1 00:04:51.988 --rc genhtml_function_coverage=1 00:04:51.988 --rc genhtml_legend=1 00:04:51.988 --rc geninfo_all_blocks=1 00:04:51.988 --rc geninfo_unexecuted_blocks=1 00:04:51.988 00:04:51.988 ' 00:04:51.988 02:21:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:51.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.988 --rc genhtml_branch_coverage=1 00:04:51.988 --rc genhtml_function_coverage=1 00:04:51.988 --rc genhtml_legend=1 00:04:51.988 --rc geninfo_all_blocks=1 00:04:51.988 --rc geninfo_unexecuted_blocks=1 00:04:51.988 00:04:51.988 ' 
00:04:51.988 02:21:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:51.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.988 --rc genhtml_branch_coverage=1 00:04:51.988 --rc genhtml_function_coverage=1 00:04:51.988 --rc genhtml_legend=1 00:04:51.988 --rc geninfo_all_blocks=1 00:04:51.988 --rc geninfo_unexecuted_blocks=1 00:04:51.988 00:04:51.988 ' 00:04:51.988 02:21:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:51.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.988 --rc genhtml_branch_coverage=1 00:04:51.988 --rc genhtml_function_coverage=1 00:04:51.988 --rc genhtml_legend=1 00:04:51.988 --rc geninfo_all_blocks=1 00:04:51.988 --rc geninfo_unexecuted_blocks=1 00:04:51.988 00:04:51.988 ' 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.988 02:21:32 -- nvmf/common.sh@7 -- # uname -s 00:04:51.988 02:21:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.988 02:21:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.988 02:21:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.988 02:21:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.988 02:21:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.988 02:21:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.988 02:21:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.988 02:21:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.988 02:21:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.988 02:21:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.988 02:21:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:04:51.988 02:21:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:04:51.988 02:21:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.988 02:21:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.988 02:21:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.988 02:21:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.988 02:21:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.988 02:21:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.988 02:21:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.988 02:21:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.988 02:21:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.988 02:21:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.988 02:21:32 -- paths/export.sh@5 -- # export PATH 00:04:51.988 02:21:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.988 02:21:32 -- nvmf/common.sh@46 -- # : 0 00:04:51.988 02:21:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:51.988 02:21:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:51.988 02:21:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:51.988 02:21:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.988 02:21:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.988 02:21:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:51.988 02:21:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:51.988 02:21:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:51.988 INFO: launching applications... 
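Before launching anything, json_config_extra_key.sh sets up the per-app bookkeeping traced above: one associative array each for the PID, the RPC socket, the spdk_tgt flags, and the JSON config passed via --json, plus an ERR trap for fail-fast diagnostics. Reduced to its essentials:

    # Per-app state used by the extra_key test; values match the trace above.
    rootdir=/home/vagrant/spdk_repo/spdk
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR    # test's own error handler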
00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56286 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:51.988 Waiting for target to run... 00:04:51.988 02:21:32 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56286 /var/tmp/spdk_tgt.sock 00:04:51.988 02:21:32 -- common/autotest_common.sh@829 -- # '[' -z 56286 ']' 00:04:51.988 02:21:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.988 02:21:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.988 02:21:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.988 02:21:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.988 02:21:32 -- common/autotest_common.sh@10 -- # set +x 00:04:51.988 [2024-11-21 02:21:32.606703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:51.988 [2024-11-21 02:21:32.606852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56286 ] 00:04:52.555 [2024-11-21 02:21:33.025138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.555 [2024-11-21 02:21:33.091938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.555 [2024-11-21 02:21:33.092119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.124 00:04:53.124 INFO: shutting down applications... 00:04:53.124 02:21:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.124 02:21:33 -- common/autotest_common.sh@862 -- # return 0 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
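Between launching spdk_tgt with the extra_key JSON and shutting it down, the trace above passes through waitforlisten, which blocks until the new process answers on its UNIX-domain RPC socket. Conceptually it is a bounded poll; a sketch under that assumption, not the helper's real body:

    # Rough model of waitforlisten: the target counts as "up" once rpc.py can
    # reach its socket; give up after ~100 retries or if the process dies.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
        local retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1        # app died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
                    rpc_get_methods > /dev/null 2>&1; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }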
00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56286 ]] 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56286 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56286 00:04:53.124 02:21:33 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:53.690 02:21:34 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:53.690 02:21:34 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:53.690 02:21:34 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56286 00:04:53.690 02:21:34 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:54.257 02:21:34 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:54.257 02:21:34 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:54.257 02:21:34 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56286 00:04:54.257 02:21:34 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:54.257 02:21:34 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:54.257 02:21:34 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:54.257 SPDK target shutdown done 00:04:54.258 02:21:34 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:54.258 Success 00:04:54.258 02:21:34 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:54.258 00:04:54.258 real 0m2.263s 00:04:54.258 user 0m1.867s 00:04:54.258 sys 0m0.447s 00:04:54.258 02:21:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.258 ************************************ 00:04:54.258 END TEST json_config_extra_key 00:04:54.258 ************************************ 00:04:54.258 02:21:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.258 02:21:34 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.258 02:21:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.258 02:21:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.258 02:21:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.258 ************************************ 00:04:54.258 START TEST alias_rpc 00:04:54.258 ************************************ 00:04:54.258 02:21:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.258 * Looking for test storage... 
00:04:54.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:54.258 02:21:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:54.258 02:21:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:54.258 02:21:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:54.258 02:21:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:54.258 02:21:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:54.258 02:21:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:54.258 02:21:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:54.258 02:21:34 -- scripts/common.sh@335 -- # IFS=.-: 00:04:54.258 02:21:34 -- scripts/common.sh@335 -- # read -ra ver1 00:04:54.258 02:21:34 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.258 02:21:34 -- scripts/common.sh@336 -- # read -ra ver2 00:04:54.258 02:21:34 -- scripts/common.sh@337 -- # local 'op=<' 00:04:54.258 02:21:34 -- scripts/common.sh@339 -- # ver1_l=2 00:04:54.258 02:21:34 -- scripts/common.sh@340 -- # ver2_l=1 00:04:54.258 02:21:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:54.258 02:21:34 -- scripts/common.sh@343 -- # case "$op" in 00:04:54.258 02:21:34 -- scripts/common.sh@344 -- # : 1 00:04:54.258 02:21:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:54.258 02:21:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.258 02:21:34 -- scripts/common.sh@364 -- # decimal 1 00:04:54.258 02:21:34 -- scripts/common.sh@352 -- # local d=1 00:04:54.258 02:21:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.258 02:21:34 -- scripts/common.sh@354 -- # echo 1 00:04:54.258 02:21:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:54.258 02:21:34 -- scripts/common.sh@365 -- # decimal 2 00:04:54.258 02:21:34 -- scripts/common.sh@352 -- # local d=2 00:04:54.258 02:21:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.258 02:21:34 -- scripts/common.sh@354 -- # echo 2 00:04:54.258 02:21:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:54.258 02:21:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:54.258 02:21:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:54.258 02:21:34 -- scripts/common.sh@367 -- # return 0 00:04:54.258 02:21:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.258 02:21:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:54.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.258 --rc genhtml_branch_coverage=1 00:04:54.258 --rc genhtml_function_coverage=1 00:04:54.258 --rc genhtml_legend=1 00:04:54.258 --rc geninfo_all_blocks=1 00:04:54.258 --rc geninfo_unexecuted_blocks=1 00:04:54.258 00:04:54.258 ' 00:04:54.258 02:21:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:54.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.258 --rc genhtml_branch_coverage=1 00:04:54.258 --rc genhtml_function_coverage=1 00:04:54.258 --rc genhtml_legend=1 00:04:54.258 --rc geninfo_all_blocks=1 00:04:54.258 --rc geninfo_unexecuted_blocks=1 00:04:54.258 00:04:54.258 ' 00:04:54.258 02:21:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:54.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.258 --rc genhtml_branch_coverage=1 00:04:54.258 --rc genhtml_function_coverage=1 00:04:54.258 --rc genhtml_legend=1 00:04:54.258 --rc geninfo_all_blocks=1 00:04:54.258 --rc geninfo_unexecuted_blocks=1 00:04:54.258 00:04:54.258 ' 
00:04:54.258 02:21:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:54.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.258 --rc genhtml_branch_coverage=1 00:04:54.258 --rc genhtml_function_coverage=1 00:04:54.258 --rc genhtml_legend=1 00:04:54.258 --rc geninfo_all_blocks=1 00:04:54.258 --rc geninfo_unexecuted_blocks=1 00:04:54.258 00:04:54.258 ' 00:04:54.258 02:21:34 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.258 02:21:34 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56376 00:04:54.258 02:21:34 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.258 02:21:34 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56376 00:04:54.258 02:21:34 -- common/autotest_common.sh@829 -- # '[' -z 56376 ']' 00:04:54.258 02:21:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.258 02:21:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.258 02:21:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.258 02:21:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.258 02:21:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.517 [2024-11-21 02:21:34.917429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:54.517 [2024-11-21 02:21:34.917533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56376 ] 00:04:54.517 [2024-11-21 02:21:35.040286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.517 [2024-11-21 02:21:35.122488] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.517 [2024-11-21 02:21:35.122682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.454 02:21:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.454 02:21:35 -- common/autotest_common.sh@862 -- # return 0 00:04:55.454 02:21:35 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:55.713 02:21:36 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56376 00:04:55.713 02:21:36 -- common/autotest_common.sh@936 -- # '[' -z 56376 ']' 00:04:55.713 02:21:36 -- common/autotest_common.sh@940 -- # kill -0 56376 00:04:55.713 02:21:36 -- common/autotest_common.sh@941 -- # uname 00:04:55.713 02:21:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:55.713 02:21:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56376 00:04:55.713 02:21:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:55.713 02:21:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:55.713 killing process with pid 56376 00:04:55.713 02:21:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56376' 00:04:55.713 02:21:36 -- common/autotest_common.sh@955 -- # kill 56376 00:04:55.713 02:21:36 -- common/autotest_common.sh@960 -- # wait 56376 00:04:56.280 00:04:56.280 real 0m2.151s 00:04:56.280 user 0m2.393s 00:04:56.280 sys 0m0.538s 00:04:56.280 02:21:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:56.280 02:21:36 -- common/autotest_common.sh@10 -- # set +x 
00:04:56.280 ************************************ 00:04:56.280 END TEST alias_rpc 00:04:56.280 ************************************ 00:04:56.280 02:21:36 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:04:56.280 02:21:36 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.280 02:21:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.280 02:21:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.280 02:21:36 -- common/autotest_common.sh@10 -- # set +x 00:04:56.280 ************************************ 00:04:56.280 START TEST dpdk_mem_utility 00:04:56.280 ************************************ 00:04:56.280 02:21:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.539 * Looking for test storage... 00:04:56.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:56.539 02:21:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:56.539 02:21:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:56.539 02:21:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:56.539 02:21:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:56.539 02:21:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:56.539 02:21:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:56.539 02:21:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:56.539 02:21:37 -- scripts/common.sh@335 -- # IFS=.-: 00:04:56.539 02:21:37 -- scripts/common.sh@335 -- # read -ra ver1 00:04:56.539 02:21:37 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.539 02:21:37 -- scripts/common.sh@336 -- # read -ra ver2 00:04:56.539 02:21:37 -- scripts/common.sh@337 -- # local 'op=<' 00:04:56.539 02:21:37 -- scripts/common.sh@339 -- # ver1_l=2 00:04:56.539 02:21:37 -- scripts/common.sh@340 -- # ver2_l=1 00:04:56.539 02:21:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:56.539 02:21:37 -- scripts/common.sh@343 -- # case "$op" in 00:04:56.539 02:21:37 -- scripts/common.sh@344 -- # : 1 00:04:56.539 02:21:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:56.539 02:21:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.539 02:21:37 -- scripts/common.sh@364 -- # decimal 1 00:04:56.539 02:21:37 -- scripts/common.sh@352 -- # local d=1 00:04:56.539 02:21:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.539 02:21:37 -- scripts/common.sh@354 -- # echo 1 00:04:56.539 02:21:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:56.539 02:21:37 -- scripts/common.sh@365 -- # decimal 2 00:04:56.539 02:21:37 -- scripts/common.sh@352 -- # local d=2 00:04:56.539 02:21:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.539 02:21:37 -- scripts/common.sh@354 -- # echo 2 00:04:56.540 02:21:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:56.540 02:21:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:56.540 02:21:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:56.540 02:21:37 -- scripts/common.sh@367 -- # return 0 00:04:56.540 02:21:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.540 02:21:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.540 --rc genhtml_branch_coverage=1 00:04:56.540 --rc genhtml_function_coverage=1 00:04:56.540 --rc genhtml_legend=1 00:04:56.540 --rc geninfo_all_blocks=1 00:04:56.540 --rc geninfo_unexecuted_blocks=1 00:04:56.540 00:04:56.540 ' 00:04:56.540 02:21:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.540 --rc genhtml_branch_coverage=1 00:04:56.540 --rc genhtml_function_coverage=1 00:04:56.540 --rc genhtml_legend=1 00:04:56.540 --rc geninfo_all_blocks=1 00:04:56.540 --rc geninfo_unexecuted_blocks=1 00:04:56.540 00:04:56.540 ' 00:04:56.540 02:21:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.540 --rc genhtml_branch_coverage=1 00:04:56.540 --rc genhtml_function_coverage=1 00:04:56.540 --rc genhtml_legend=1 00:04:56.540 --rc geninfo_all_blocks=1 00:04:56.540 --rc geninfo_unexecuted_blocks=1 00:04:56.540 00:04:56.540 ' 00:04:56.540 02:21:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.540 --rc genhtml_branch_coverage=1 00:04:56.540 --rc genhtml_function_coverage=1 00:04:56.540 --rc genhtml_legend=1 00:04:56.540 --rc geninfo_all_blocks=1 00:04:56.540 --rc geninfo_unexecuted_blocks=1 00:04:56.540 00:04:56.540 ' 00:04:56.540 02:21:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.540 02:21:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56475 00:04:56.540 02:21:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56475 00:04:56.540 02:21:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.540 02:21:37 -- common/autotest_common.sh@829 -- # '[' -z 56475 ']' 00:04:56.540 02:21:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.540 02:21:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.540 02:21:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
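Once the target is listening, the dpdk_mem_utility body asks it to dump its DPDK allocator state and post-processes that dump with dpdk_mem_info.py, which is what produces the long heap, mempool, and memzone listing that follows. Reduced to its essential calls (assuming, as the trace ordering suggests, that the helper reads the /tmp/spdk_mem_dump.txt file the RPC reports):

    # Ask the running target for its DPDK memory stats, then summarize the dump.
    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats      # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    "$rootdir/scripts/dpdk_mem_info.py"                   # heap/mempool/memzone summary
    "$rootdir/scripts/dpdk_mem_info.py" -m 0              # per-element detail for heap id 0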
00:04:56.540 02:21:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.540 02:21:37 -- common/autotest_common.sh@10 -- # set +x 00:04:56.540 [2024-11-21 02:21:37.141894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:56.540 [2024-11-21 02:21:37.142005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56475 ] 00:04:56.798 [2024-11-21 02:21:37.278510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.798 [2024-11-21 02:21:37.361666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.798 [2024-11-21 02:21:37.361853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.734 02:21:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.734 02:21:38 -- common/autotest_common.sh@862 -- # return 0 00:04:57.734 02:21:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.734 02:21:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.734 02:21:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.734 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:04:57.734 { 00:04:57.734 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.734 } 00:04:57.734 02:21:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.735 02:21:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.735 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:57.735 1 heaps totaling size 814.000000 MiB 00:04:57.735 size: 814.000000 MiB heap id: 0 00:04:57.735 end heaps---------- 00:04:57.735 8 mempools totaling size 598.116089 MiB 00:04:57.735 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:57.735 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:57.735 size: 84.521057 MiB name: bdev_io_56475 00:04:57.735 size: 51.011292 MiB name: evtpool_56475 00:04:57.735 size: 50.003479 MiB name: msgpool_56475 00:04:57.735 size: 21.763794 MiB name: PDU_Pool 00:04:57.735 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:57.735 size: 0.026123 MiB name: Session_Pool 00:04:57.735 end mempools------- 00:04:57.735 6 memzones totaling size 4.142822 MiB 00:04:57.735 size: 1.000366 MiB name: RG_ring_0_56475 00:04:57.735 size: 1.000366 MiB name: RG_ring_1_56475 00:04:57.735 size: 1.000366 MiB name: RG_ring_4_56475 00:04:57.735 size: 1.000366 MiB name: RG_ring_5_56475 00:04:57.735 size: 0.125366 MiB name: RG_ring_2_56475 00:04:57.735 size: 0.015991 MiB name: RG_ring_3_56475 00:04:57.735 end memzones------- 00:04:57.735 02:21:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:57.735 heap id: 0 total size: 814.000000 MiB number of busy elements: 214 number of free elements: 15 00:04:57.735 list of free elements. 
size: 12.487671 MiB 00:04:57.735 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:57.735 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:57.735 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:57.735 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:57.735 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:57.735 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:57.735 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:57.735 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:57.735 element at address: 0x200000200000 with size: 0.837219 MiB 00:04:57.735 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:04:57.735 element at address: 0x20000b200000 with size: 0.489990 MiB 00:04:57.735 element at address: 0x200000800000 with size: 0.487061 MiB 00:04:57.735 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:57.735 element at address: 0x200027e00000 with size: 0.398315 MiB 00:04:57.735 element at address: 0x200003a00000 with size: 0.351685 MiB 00:04:57.735 list of standard malloc elements. size: 199.249756 MiB 00:04:57.735 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:57.735 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:57.735 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:57.735 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:57.735 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:57.735 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:57.735 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:57.735 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:57.735 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:57.735 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:04:57.735 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:57.735 element at 
address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:57.735 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:57.736 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e66040 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e640 with size: 0.000183 MiB 
00:04:57.736 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:57.736 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:57.736 list of memzone associated elements. 
size: 602.262573 MiB 00:04:57.736 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:57.736 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:57.736 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:57.736 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:57.736 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:57.736 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56475_0 00:04:57.736 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:57.736 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56475_0 00:04:57.736 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:57.736 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56475_0 00:04:57.736 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:57.736 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:57.736 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:57.736 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:57.736 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:57.736 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56475 00:04:57.736 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:57.736 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56475 00:04:57.736 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:57.736 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56475 00:04:57.736 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:57.736 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:57.736 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:57.736 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:57.736 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:57.736 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:57.736 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:57.736 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:57.736 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:57.736 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56475 00:04:57.736 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:57.736 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56475 00:04:57.736 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:57.736 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56475 00:04:57.736 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:57.736 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56475 00:04:57.736 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:57.736 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56475 00:04:57.736 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:57.736 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:57.736 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:57.736 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:57.736 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:57.736 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:57.736 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:57.736 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56475 00:04:57.736 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:57.736 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:57.737 element at address: 0x200027e66100 with size: 0.023743 MiB 00:04:57.737 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:57.737 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:57.737 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56475 00:04:57.737 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:04:57.737 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:57.737 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:57.737 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56475 00:04:57.737 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:57.737 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56475 00:04:57.737 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:04:57.737 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:57.737 02:21:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:57.737 02:21:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56475 00:04:57.737 02:21:38 -- common/autotest_common.sh@936 -- # '[' -z 56475 ']' 00:04:57.737 02:21:38 -- common/autotest_common.sh@940 -- # kill -0 56475 00:04:57.737 02:21:38 -- common/autotest_common.sh@941 -- # uname 00:04:57.737 02:21:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.737 02:21:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56475 00:04:57.996 02:21:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.996 killing process with pid 56475 00:04:57.996 02:21:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.996 02:21:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56475' 00:04:57.996 02:21:38 -- common/autotest_common.sh@955 -- # kill 56475 00:04:57.996 02:21:38 -- common/autotest_common.sh@960 -- # wait 56475 00:04:58.563 00:04:58.563 real 0m2.031s 00:04:58.563 user 0m2.169s 00:04:58.563 sys 0m0.535s 00:04:58.563 02:21:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:58.563 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:04:58.563 ************************************ 00:04:58.563 END TEST dpdk_mem_utility 00:04:58.563 ************************************ 00:04:58.563 02:21:38 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:58.563 02:21:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.563 02:21:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.563 02:21:38 -- common/autotest_common.sh@10 -- # set +x 00:04:58.563 ************************************ 00:04:58.563 START TEST event 00:04:58.563 ************************************ 00:04:58.563 02:21:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:58.563 * Looking for test storage... 
00:04:58.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:58.564 02:21:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:58.564 02:21:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:58.564 02:21:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:58.564 02:21:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:58.564 02:21:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:58.564 02:21:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:58.564 02:21:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:58.564 02:21:39 -- scripts/common.sh@335 -- # IFS=.-: 00:04:58.564 02:21:39 -- scripts/common.sh@335 -- # read -ra ver1 00:04:58.564 02:21:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.564 02:21:39 -- scripts/common.sh@336 -- # read -ra ver2 00:04:58.564 02:21:39 -- scripts/common.sh@337 -- # local 'op=<' 00:04:58.564 02:21:39 -- scripts/common.sh@339 -- # ver1_l=2 00:04:58.564 02:21:39 -- scripts/common.sh@340 -- # ver2_l=1 00:04:58.564 02:21:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:58.564 02:21:39 -- scripts/common.sh@343 -- # case "$op" in 00:04:58.564 02:21:39 -- scripts/common.sh@344 -- # : 1 00:04:58.564 02:21:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:58.564 02:21:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.564 02:21:39 -- scripts/common.sh@364 -- # decimal 1 00:04:58.564 02:21:39 -- scripts/common.sh@352 -- # local d=1 00:04:58.564 02:21:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.564 02:21:39 -- scripts/common.sh@354 -- # echo 1 00:04:58.564 02:21:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:58.564 02:21:39 -- scripts/common.sh@365 -- # decimal 2 00:04:58.564 02:21:39 -- scripts/common.sh@352 -- # local d=2 00:04:58.564 02:21:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.564 02:21:39 -- scripts/common.sh@354 -- # echo 2 00:04:58.564 02:21:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:58.564 02:21:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:58.564 02:21:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:58.564 02:21:39 -- scripts/common.sh@367 -- # return 0 00:04:58.564 02:21:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.564 02:21:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:58.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.564 --rc genhtml_branch_coverage=1 00:04:58.564 --rc genhtml_function_coverage=1 00:04:58.564 --rc genhtml_legend=1 00:04:58.564 --rc geninfo_all_blocks=1 00:04:58.564 --rc geninfo_unexecuted_blocks=1 00:04:58.564 00:04:58.564 ' 00:04:58.564 02:21:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:58.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.564 --rc genhtml_branch_coverage=1 00:04:58.564 --rc genhtml_function_coverage=1 00:04:58.564 --rc genhtml_legend=1 00:04:58.564 --rc geninfo_all_blocks=1 00:04:58.564 --rc geninfo_unexecuted_blocks=1 00:04:58.564 00:04:58.564 ' 00:04:58.564 02:21:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:58.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.564 --rc genhtml_branch_coverage=1 00:04:58.564 --rc genhtml_function_coverage=1 00:04:58.564 --rc genhtml_legend=1 00:04:58.564 --rc geninfo_all_blocks=1 00:04:58.564 --rc geninfo_unexecuted_blocks=1 00:04:58.564 00:04:58.564 ' 00:04:58.564 02:21:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:58.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.564 --rc genhtml_branch_coverage=1 00:04:58.564 --rc genhtml_function_coverage=1 00:04:58.564 --rc genhtml_legend=1 00:04:58.564 --rc geninfo_all_blocks=1 00:04:58.564 --rc geninfo_unexecuted_blocks=1 00:04:58.564 00:04:58.564 ' 00:04:58.564 02:21:39 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:58.564 02:21:39 -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.564 02:21:39 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.564 02:21:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:58.564 02:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.564 02:21:39 -- common/autotest_common.sh@10 -- # set +x 00:04:58.564 ************************************ 00:04:58.564 START TEST event_perf 00:04:58.564 ************************************ 00:04:58.564 02:21:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.821 Running I/O for 1 seconds...[2024-11-21 02:21:39.218620] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:58.821 [2024-11-21 02:21:39.218714] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56577 ] 00:04:58.821 [2024-11-21 02:21:39.351313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.821 [2024-11-21 02:21:39.435587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.821 [2024-11-21 02:21:39.435724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.821 [2024-11-21 02:21:39.435872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.821 [2024-11-21 02:21:39.435872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.196 Running I/O for 1 seconds... 00:05:00.196 lcore 0: 125322 00:05:00.196 lcore 1: 125324 00:05:00.196 lcore 2: 125322 00:05:00.196 lcore 3: 125321 00:05:00.196 done. 00:05:00.196 00:05:00.196 ************************************ 00:05:00.196 END TEST event_perf 00:05:00.196 ************************************ 00:05:00.196 real 0m1.367s 00:05:00.196 user 0m4.165s 00:05:00.196 sys 0m0.075s 00:05:00.196 02:21:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.196 02:21:40 -- common/autotest_common.sh@10 -- # set +x 00:05:00.196 02:21:40 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:00.196 02:21:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:00.196 02:21:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.196 02:21:40 -- common/autotest_common.sh@10 -- # set +x 00:05:00.196 ************************************ 00:05:00.196 START TEST event_reactor 00:05:00.196 ************************************ 00:05:00.196 02:21:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:00.196 [2024-11-21 02:21:40.636645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
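The START TEST / END TEST banners running through this log come from a run_test-style wrapper that names a test, runs the command, and reports its outcome. As rough orientation only, a minimal stand-in for that pattern (not the actual autotest_common.sh helper, which also handles xtrace and timing) looks like this:

run_test() {
    # Print a named banner, run the given command, and preserve its exit code.
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Mirrors the invocation traced above:
# run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1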
00:05:00.196 [2024-11-21 02:21:40.636766] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56621 ] 00:05:00.196 [2024-11-21 02:21:40.773089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.455 [2024-11-21 02:21:40.862691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.429 test_start 00:05:01.429 oneshot 00:05:01.429 tick 100 00:05:01.429 tick 100 00:05:01.429 tick 250 00:05:01.429 tick 100 00:05:01.429 tick 100 00:05:01.429 tick 100 00:05:01.429 tick 250 00:05:01.429 tick 500 00:05:01.429 tick 100 00:05:01.429 tick 100 00:05:01.429 tick 250 00:05:01.429 tick 100 00:05:01.429 tick 100 00:05:01.429 test_end 00:05:01.429 00:05:01.429 real 0m1.347s 00:05:01.429 user 0m1.185s 00:05:01.429 sys 0m0.055s 00:05:01.429 ************************************ 00:05:01.429 END TEST event_reactor 00:05:01.429 ************************************ 00:05:01.429 02:21:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:01.429 02:21:41 -- common/autotest_common.sh@10 -- # set +x 00:05:01.429 02:21:42 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.429 02:21:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:01.429 02:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.429 02:21:42 -- common/autotest_common.sh@10 -- # set +x 00:05:01.429 ************************************ 00:05:01.429 START TEST event_reactor_perf 00:05:01.429 ************************************ 00:05:01.429 02:21:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.429 [2024-11-21 02:21:42.040731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
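The scripts/common.sh trace that precedes each test group ('lt 1.15 2', cmp_versions, IFS=.-:) is a dotted-version comparison used to decide which lcov coverage flags to export: both version strings are split on dots, dashes and colons and the fields are compared numerically. A self-contained sketch of the same idea (an illustrative equivalent, not the SPDK implementation):

version_lt() {
    # Return success if $1 is strictly older than $2.
    local -a ver1 ver2
    local i f1 f2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        f1=${ver1[i]:-0}; f2=${ver2[i]:-0}
        (( f1 < f2 )) && return 0
        (( f1 > f2 )) && return 1
    done
    return 1   # equal is not "less than"
}

# version_lt 1.15 2 succeeds, so the branch/function coverage LCOV_OPTS seen in this log get exported.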
00:05:01.429 [2024-11-21 02:21:42.040850] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56651 ] 00:05:01.728 [2024-11-21 02:21:42.176848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.728 [2024-11-21 02:21:42.255390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.123 test_start 00:05:03.123 test_end 00:05:03.123 Performance: 477905 events per second 00:05:03.123 ************************************ 00:05:03.123 END TEST event_reactor_perf 00:05:03.123 ************************************ 00:05:03.123 00:05:03.123 real 0m1.350s 00:05:03.123 user 0m1.185s 00:05:03.123 sys 0m0.059s 00:05:03.123 02:21:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.123 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:03.123 02:21:43 -- event/event.sh@49 -- # uname -s 00:05:03.123 02:21:43 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:03.123 02:21:43 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:03.123 02:21:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.123 02:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.123 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:03.123 ************************************ 00:05:03.123 START TEST event_scheduler 00:05:03.123 ************************************ 00:05:03.123 02:21:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:03.123 * Looking for test storage... 00:05:03.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:03.123 02:21:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:03.123 02:21:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:03.123 02:21:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:03.123 02:21:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:03.123 02:21:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:03.123 02:21:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:03.123 02:21:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:03.123 02:21:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:03.123 02:21:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:03.123 02:21:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.123 02:21:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:03.123 02:21:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:03.123 02:21:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:03.123 02:21:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:03.123 02:21:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:03.123 02:21:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:03.123 02:21:43 -- scripts/common.sh@344 -- # : 1 00:05:03.123 02:21:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:03.123 02:21:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.123 02:21:43 -- scripts/common.sh@364 -- # decimal 1 00:05:03.123 02:21:43 -- scripts/common.sh@352 -- # local d=1 00:05:03.123 02:21:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.123 02:21:43 -- scripts/common.sh@354 -- # echo 1 00:05:03.123 02:21:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:03.123 02:21:43 -- scripts/common.sh@365 -- # decimal 2 00:05:03.123 02:21:43 -- scripts/common.sh@352 -- # local d=2 00:05:03.123 02:21:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.123 02:21:43 -- scripts/common.sh@354 -- # echo 2 00:05:03.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.123 02:21:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:03.123 02:21:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:03.123 02:21:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:03.123 02:21:43 -- scripts/common.sh@367 -- # return 0 00:05:03.123 02:21:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.123 02:21:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.123 --rc genhtml_branch_coverage=1 00:05:03.123 --rc genhtml_function_coverage=1 00:05:03.123 --rc genhtml_legend=1 00:05:03.123 --rc geninfo_all_blocks=1 00:05:03.123 --rc geninfo_unexecuted_blocks=1 00:05:03.123 00:05:03.123 ' 00:05:03.123 02:21:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:03.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.123 --rc genhtml_branch_coverage=1 00:05:03.123 --rc genhtml_function_coverage=1 00:05:03.123 --rc genhtml_legend=1 00:05:03.123 --rc geninfo_all_blocks=1 00:05:03.124 --rc geninfo_unexecuted_blocks=1 00:05:03.124 00:05:03.124 ' 00:05:03.124 02:21:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:03.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.124 --rc genhtml_branch_coverage=1 00:05:03.124 --rc genhtml_function_coverage=1 00:05:03.124 --rc genhtml_legend=1 00:05:03.124 --rc geninfo_all_blocks=1 00:05:03.124 --rc geninfo_unexecuted_blocks=1 00:05:03.124 00:05:03.124 ' 00:05:03.124 02:21:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:03.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.124 --rc genhtml_branch_coverage=1 00:05:03.124 --rc genhtml_function_coverage=1 00:05:03.124 --rc genhtml_legend=1 00:05:03.124 --rc geninfo_all_blocks=1 00:05:03.124 --rc geninfo_unexecuted_blocks=1 00:05:03.124 00:05:03.124 ' 00:05:03.124 02:21:43 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:03.124 02:21:43 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56714 00:05:03.124 02:21:43 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.124 02:21:43 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:03.124 02:21:43 -- scheduler/scheduler.sh@37 -- # waitforlisten 56714 00:05:03.124 02:21:43 -- common/autotest_common.sh@829 -- # '[' -z 56714 ']' 00:05:03.124 02:21:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.124 02:21:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.124 02:21:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:03.124 02:21:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.124 02:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:03.124 [2024-11-21 02:21:43.675915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:03.124 [2024-11-21 02:21:43.676185] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56714 ] 00:05:03.384 [2024-11-21 02:21:43.817712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.384 [2024-11-21 02:21:43.946901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.384 [2024-11-21 02:21:43.946993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.384 [2024-11-21 02:21:43.947149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.384 [2024-11-21 02:21:43.947154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.321 02:21:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.321 02:21:44 -- common/autotest_common.sh@862 -- # return 0 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 POWER: Env isn't set yet! 00:05:04.321 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:04.321 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.321 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.321 POWER: Attempting to initialise PSTAT power management... 00:05:04.321 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.321 POWER: Cannot set governor of lcore 0 to performance 00:05:04.321 POWER: Attempting to initialise AMD PSTATE power management... 00:05:04.321 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.321 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.321 POWER: Attempting to initialise CPPC power management... 00:05:04.321 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.321 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.321 POWER: Attempting to initialise VM power management... 
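Because scheduler.sh launched the app with -m 0xF -p 0x2 --wait-for-rpc, everything from here on is driven over the RPC socket: the dynamic scheduler is selected before subsystem init, init is completed, and then pinned threads with different activity levels are created for it to balance (the POWER/GUEST_CHANNEL errors in this stretch only mean no usable cpufreq governor exists in this VM, so the governor setup is skipped and the test continues). The shape of that sequence, using the same rpc_cmd calls and flag values traced in scheduler.sh:

# rpc_cmd (set up by the test harness) issues JSON-RPCs to the app's /var/tmp/spdk.sock
rpc_cmd framework_set_scheduler dynamic        # choose the scheduler while still in --wait-for-rpc
rpc_cmd framework_start_init                   # finish subsystem initialization
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0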
00:05:04.321 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:04.321 POWER: Unable to set Power Management Environment for lcore 0 00:05:04.321 [2024-11-21 02:21:44.672134] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:04.321 [2024-11-21 02:21:44.672176] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:04.321 [2024-11-21 02:21:44.672227] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:04.321 [2024-11-21 02:21:44.672279] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:04.321 [2024-11-21 02:21:44.672309] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:04.321 [2024-11-21 02:21:44.672546] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 [2024-11-21 02:21:44.766331] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:04.321 02:21:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.321 02:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 ************************************ 00:05:04.321 START TEST scheduler_create_thread 00:05:04.321 ************************************ 00:05:04.321 02:21:44 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 2 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 3 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 4 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 5 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 6 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 7 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.321 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.321 8 00:05:04.321 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.321 02:21:44 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:04.321 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.322 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.322 9 00:05:04.322 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.322 02:21:44 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:04.322 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.322 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.322 10 00:05:04.322 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.322 02:21:44 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:04.322 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.322 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.322 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.322 02:21:44 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:04.322 02:21:44 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:04.322 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.322 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:04.322 02:21:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.322 02:21:44 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:04.322 02:21:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.322 02:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.698 02:21:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.698 02:21:46 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:05.698 02:21:46 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:05.698 02:21:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.698 02:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.076 02:21:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.076 ************************************ 00:05:07.076 END TEST scheduler_create_thread 00:05:07.076 ************************************ 00:05:07.076 00:05:07.076 real 0m2.612s 00:05:07.076 user 0m0.015s 00:05:07.076 sys 0m0.007s 00:05:07.076 02:21:47 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.076 02:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:07.076 02:21:47 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.076 02:21:47 -- scheduler/scheduler.sh@46 -- # killprocess 56714 00:05:07.076 02:21:47 -- common/autotest_common.sh@936 -- # '[' -z 56714 ']' 00:05:07.076 02:21:47 -- common/autotest_common.sh@940 -- # kill -0 56714 00:05:07.076 02:21:47 -- common/autotest_common.sh@941 -- # uname 00:05:07.076 02:21:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.076 02:21:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56714 00:05:07.076 killing process with pid 56714 00:05:07.076 02:21:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:07.076 02:21:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:07.076 02:21:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56714' 00:05:07.076 02:21:47 -- common/autotest_common.sh@955 -- # kill 56714 00:05:07.076 02:21:47 -- common/autotest_common.sh@960 -- # wait 56714 00:05:07.335 [2024-11-21 02:21:47.870061] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:07.593 00:05:07.593 real 0m4.759s 00:05:07.593 user 0m8.802s 00:05:07.593 sys 0m0.418s 00:05:07.593 02:21:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:07.593 ************************************ 00:05:07.593 END TEST event_scheduler 00:05:07.593 ************************************ 00:05:07.593 02:21:48 -- common/autotest_common.sh@10 -- # set +x 00:05:07.593 02:21:48 -- event/event.sh@51 -- # modprobe -n nbd 00:05:07.853 02:21:48 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:07.853 02:21:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.853 02:21:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.853 02:21:48 -- common/autotest_common.sh@10 -- # set +x 00:05:07.853 ************************************ 00:05:07.853 START TEST app_repeat 00:05:07.853 ************************************ 00:05:07.853 02:21:48 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:07.853 02:21:48 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.853 02:21:48 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.853 02:21:48 -- event/event.sh@13 -- # local nbd_list 00:05:07.853 02:21:48 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.853 02:21:48 -- event/event.sh@14 -- # local bdev_list 00:05:07.853 02:21:48 -- event/event.sh@15 -- # local repeat_times=4 00:05:07.853 02:21:48 -- event/event.sh@17 -- # modprobe nbd 00:05:07.853 Process app_repeat pid: 56837 00:05:07.853 02:21:48 -- event/event.sh@19 -- # repeat_pid=56837 00:05:07.853 02:21:48 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:07.853 02:21:48 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.853 02:21:48 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 56837' 00:05:07.853 02:21:48 -- event/event.sh@23 -- # for i in {0..2} 00:05:07.853 spdk_app_start Round 0 00:05:07.853 02:21:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:07.853 02:21:48 -- event/event.sh@25 -- # waitforlisten 56837 /var/tmp/spdk-nbd.sock 00:05:07.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
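Each app_repeat round that follows creates two 64 MB malloc bdevs over /var/tmp/spdk-nbd.sock, exposes them as /dev/nbd0 and /dev/nbd1, and verifies data integrity by pushing a random 1 MiB pattern through each device and comparing it back. Condensed from the nbd_common.sh traces below (paths, sizes and flags exactly as they appear in this log):

dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    # write the pattern through the nbd device, bypassing the page cache
    dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    # byte-for-byte comparison of the first 1 MiB
    cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest "$nbd"
done
rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest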
00:05:07.853 02:21:48 -- common/autotest_common.sh@829 -- # '[' -z 56837 ']' 00:05:07.853 02:21:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.853 02:21:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.853 02:21:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.853 02:21:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.853 02:21:48 -- common/autotest_common.sh@10 -- # set +x 00:05:07.853 [2024-11-21 02:21:48.283272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:07.853 [2024-11-21 02:21:48.283535] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56837 ] 00:05:07.853 [2024-11-21 02:21:48.422351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.112 [2024-11-21 02:21:48.519268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.112 [2024-11-21 02:21:48.519281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.678 02:21:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.678 02:21:49 -- common/autotest_common.sh@862 -- # return 0 00:05:08.678 02:21:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.936 Malloc0 00:05:08.936 02:21:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.504 Malloc1 00:05:09.504 02:21:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@12 -- # local i 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.504 02:21:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.763 /dev/nbd0 00:05:09.763 02:21:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.763 02:21:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.763 02:21:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:09.763 02:21:50 -- common/autotest_common.sh@867 -- # local i 00:05:09.763 02:21:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:09.763 
02:21:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:09.763 02:21:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:09.763 02:21:50 -- common/autotest_common.sh@871 -- # break 00:05:09.763 02:21:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:09.763 02:21:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:09.763 02:21:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.763 1+0 records in 00:05:09.763 1+0 records out 00:05:09.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033396 s, 12.3 MB/s 00:05:09.763 02:21:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.763 02:21:50 -- common/autotest_common.sh@884 -- # size=4096 00:05:09.763 02:21:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.763 02:21:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:09.763 02:21:50 -- common/autotest_common.sh@887 -- # return 0 00:05:09.763 02:21:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.763 02:21:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.763 02:21:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.023 /dev/nbd1 00:05:10.023 02:21:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.023 02:21:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.023 02:21:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:10.023 02:21:50 -- common/autotest_common.sh@867 -- # local i 00:05:10.023 02:21:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:10.023 02:21:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:10.023 02:21:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:10.023 02:21:50 -- common/autotest_common.sh@871 -- # break 00:05:10.023 02:21:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:10.023 02:21:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:10.023 02:21:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.023 1+0 records in 00:05:10.023 1+0 records out 00:05:10.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292635 s, 14.0 MB/s 00:05:10.023 02:21:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.023 02:21:50 -- common/autotest_common.sh@884 -- # size=4096 00:05:10.023 02:21:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.023 02:21:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:10.023 02:21:50 -- common/autotest_common.sh@887 -- # return 0 00:05:10.023 02:21:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.023 02:21:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.023 02:21:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.023 02:21:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.023 02:21:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.283 { 00:05:10.283 "bdev_name": "Malloc0", 00:05:10.283 "nbd_device": "/dev/nbd0" 00:05:10.283 }, 00:05:10.283 { 00:05:10.283 "bdev_name": 
"Malloc1", 00:05:10.283 "nbd_device": "/dev/nbd1" 00:05:10.283 } 00:05:10.283 ]' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.283 { 00:05:10.283 "bdev_name": "Malloc0", 00:05:10.283 "nbd_device": "/dev/nbd0" 00:05:10.283 }, 00:05:10.283 { 00:05:10.283 "bdev_name": "Malloc1", 00:05:10.283 "nbd_device": "/dev/nbd1" 00:05:10.283 } 00:05:10.283 ]' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.283 /dev/nbd1' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.283 /dev/nbd1' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.283 256+0 records in 00:05:10.283 256+0 records out 00:05:10.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00969547 s, 108 MB/s 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.283 256+0 records in 00:05:10.283 256+0 records out 00:05:10.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216563 s, 48.4 MB/s 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.283 256+0 records in 00:05:10.283 256+0 records out 00:05:10.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252691 s, 41.5 MB/s 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@51 -- # local i 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.283 02:21:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@41 -- # break 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.542 02:21:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.801 02:21:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.801 02:21:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.801 02:21:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.802 02:21:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.802 02:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.802 02:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.802 02:21:51 -- bdev/nbd_common.sh@41 -- # break 00:05:10.802 02:21:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.802 02:21:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.802 02:21:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.802 02:21:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@65 -- # true 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.369 02:21:51 -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.369 02:21:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.628 02:21:52 -- event/event.sh@35 -- # sleep 3 00:05:11.886 [2024-11-21 02:21:52.369275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.887 [2024-11-21 02:21:52.436297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.887 
[2024-11-21 02:21:52.436309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.887 [2024-11-21 02:21:52.506495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.887 [2024-11-21 02:21:52.506563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.175 02:21:55 -- event/event.sh@23 -- # for i in {0..2} 00:05:15.175 spdk_app_start Round 1 00:05:15.175 02:21:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.175 02:21:55 -- event/event.sh@25 -- # waitforlisten 56837 /var/tmp/spdk-nbd.sock 00:05:15.175 02:21:55 -- common/autotest_common.sh@829 -- # '[' -z 56837 ']' 00:05:15.175 02:21:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.175 02:21:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.175 02:21:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.175 02:21:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.175 02:21:55 -- common/autotest_common.sh@10 -- # set +x 00:05:15.175 02:21:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.175 02:21:55 -- common/autotest_common.sh@862 -- # return 0 00:05:15.175 02:21:55 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.175 Malloc0 00:05:15.175 02:21:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.434 Malloc1 00:05:15.434 02:21:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@12 -- # local i 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.434 02:21:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.694 /dev/nbd0 00:05:15.694 02:21:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.694 02:21:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.694 02:21:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:15.694 02:21:56 -- common/autotest_common.sh@867 -- # local i 00:05:15.694 02:21:56 -- common/autotest_common.sh@869 
-- # (( i = 1 )) 00:05:15.694 02:21:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:15.694 02:21:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:15.694 02:21:56 -- common/autotest_common.sh@871 -- # break 00:05:15.694 02:21:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:15.694 02:21:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:15.694 02:21:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.694 1+0 records in 00:05:15.694 1+0 records out 00:05:15.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243578 s, 16.8 MB/s 00:05:15.694 02:21:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.694 02:21:56 -- common/autotest_common.sh@884 -- # size=4096 00:05:15.694 02:21:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.694 02:21:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:15.694 02:21:56 -- common/autotest_common.sh@887 -- # return 0 00:05:15.694 02:21:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.694 02:21:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.694 02:21:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.694 /dev/nbd1 00:05:15.953 02:21:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.953 02:21:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.953 02:21:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:15.953 02:21:56 -- common/autotest_common.sh@867 -- # local i 00:05:15.953 02:21:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:15.953 02:21:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:15.953 02:21:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:15.953 02:21:56 -- common/autotest_common.sh@871 -- # break 00:05:15.953 02:21:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:15.953 02:21:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:15.953 02:21:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.953 1+0 records in 00:05:15.953 1+0 records out 00:05:15.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033815 s, 12.1 MB/s 00:05:15.953 02:21:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.953 02:21:56 -- common/autotest_common.sh@884 -- # size=4096 00:05:15.953 02:21:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.953 02:21:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:15.953 02:21:56 -- common/autotest_common.sh@887 -- # return 0 00:05:15.953 02:21:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.953 02:21:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.953 02:21:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.953 02:21:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.953 02:21:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.211 { 00:05:16.211 "bdev_name": "Malloc0", 00:05:16.211 "nbd_device": "/dev/nbd0" 00:05:16.211 }, 00:05:16.211 { 
00:05:16.211 "bdev_name": "Malloc1", 00:05:16.211 "nbd_device": "/dev/nbd1" 00:05:16.211 } 00:05:16.211 ]' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.211 { 00:05:16.211 "bdev_name": "Malloc0", 00:05:16.211 "nbd_device": "/dev/nbd0" 00:05:16.211 }, 00:05:16.211 { 00:05:16.211 "bdev_name": "Malloc1", 00:05:16.211 "nbd_device": "/dev/nbd1" 00:05:16.211 } 00:05:16.211 ]' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.211 /dev/nbd1' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.211 /dev/nbd1' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.211 256+0 records in 00:05:16.211 256+0 records out 00:05:16.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073999 s, 142 MB/s 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.211 256+0 records in 00:05:16.211 256+0 records out 00:05:16.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224334 s, 46.7 MB/s 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.211 256+0 records in 00:05:16.211 256+0 records out 00:05:16.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290646 s, 36.1 MB/s 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.211 02:21:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.212 02:21:56 
-- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.212 02:21:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.212 02:21:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.212 02:21:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.212 02:21:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.212 02:21:56 -- bdev/nbd_common.sh@51 -- # local i 00:05:16.212 02:21:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.212 02:21:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@41 -- # break 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@41 -- # break 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.779 02:21:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.044 02:21:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@65 -- # true 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.311 02:21:57 -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.311 02:21:57 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.570 02:21:58 -- event/event.sh@35 -- # sleep 3 00:05:17.829 [2024-11-21 02:21:58.368686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.829 [2024-11-21 02:21:58.437249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on 
core 1 00:05:17.829 [2024-11-21 02:21:58.437261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.088 [2024-11-21 02:21:58.510409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.088 [2024-11-21 02:21:58.510484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.631 spdk_app_start Round 2 00:05:20.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.631 02:22:01 -- event/event.sh@23 -- # for i in {0..2} 00:05:20.632 02:22:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:20.632 02:22:01 -- event/event.sh@25 -- # waitforlisten 56837 /var/tmp/spdk-nbd.sock 00:05:20.632 02:22:01 -- common/autotest_common.sh@829 -- # '[' -z 56837 ']' 00:05:20.632 02:22:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.632 02:22:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.632 02:22:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.632 02:22:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.632 02:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:20.890 02:22:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.890 02:22:01 -- common/autotest_common.sh@862 -- # return 0 00:05:20.890 02:22:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.149 Malloc0 00:05:21.149 02:22:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.407 Malloc1 00:05:21.407 02:22:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.407 02:22:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.666 /dev/nbd0 00:05:21.666 02:22:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.666 02:22:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.666 02:22:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:21.666 02:22:02 -- common/autotest_common.sh@867 -- # local i 00:05:21.666 02:22:02 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:21.666 02:22:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:21.666 02:22:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:21.666 02:22:02 -- common/autotest_common.sh@871 -- # break 00:05:21.666 02:22:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:21.666 02:22:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:21.666 02:22:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.666 1+0 records in 00:05:21.666 1+0 records out 00:05:21.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337075 s, 12.2 MB/s 00:05:21.666 02:22:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.666 02:22:02 -- common/autotest_common.sh@884 -- # size=4096 00:05:21.666 02:22:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.666 02:22:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:21.666 02:22:02 -- common/autotest_common.sh@887 -- # return 0 00:05:21.666 02:22:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.666 02:22:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.666 02:22:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.925 /dev/nbd1 00:05:21.925 02:22:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.925 02:22:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.925 02:22:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:21.925 02:22:02 -- common/autotest_common.sh@867 -- # local i 00:05:21.925 02:22:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:21.925 02:22:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:21.925 02:22:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:21.925 02:22:02 -- common/autotest_common.sh@871 -- # break 00:05:21.925 02:22:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:21.925 02:22:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:21.925 02:22:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.925 1+0 records in 00:05:21.925 1+0 records out 00:05:21.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306325 s, 13.4 MB/s 00:05:21.926 02:22:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.926 02:22:02 -- common/autotest_common.sh@884 -- # size=4096 00:05:21.926 02:22:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.926 02:22:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:21.926 02:22:02 -- common/autotest_common.sh@887 -- # return 0 00:05:21.926 02:22:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.926 02:22:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.926 02:22:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.926 02:22:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.926 02:22:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.493 { 00:05:22.493 "bdev_name": "Malloc0", 00:05:22.493 "nbd_device": "/dev/nbd0" 
00:05:22.493 }, 00:05:22.493 { 00:05:22.493 "bdev_name": "Malloc1", 00:05:22.493 "nbd_device": "/dev/nbd1" 00:05:22.493 } 00:05:22.493 ]' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.493 { 00:05:22.493 "bdev_name": "Malloc0", 00:05:22.493 "nbd_device": "/dev/nbd0" 00:05:22.493 }, 00:05:22.493 { 00:05:22.493 "bdev_name": "Malloc1", 00:05:22.493 "nbd_device": "/dev/nbd1" 00:05:22.493 } 00:05:22.493 ]' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.493 /dev/nbd1' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.493 /dev/nbd1' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.493 256+0 records in 00:05:22.493 256+0 records out 00:05:22.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00934132 s, 112 MB/s 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.493 256+0 records in 00:05:22.493 256+0 records out 00:05:22.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252518 s, 41.5 MB/s 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.493 256+0 records in 00:05:22.493 256+0 records out 00:05:22.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233568 s, 44.9 MB/s 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.493 02:22:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@41 -- # break 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.752 02:22:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@41 -- # break 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.013 02:22:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.273 02:22:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.273 02:22:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.273 02:22:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@65 -- # true 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.532 02:22:03 -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.532 02:22:03 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.791 02:22:04 -- event/event.sh@35 -- # sleep 3 00:05:24.050 [2024-11-21 02:22:04.541533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.050 [2024-11-21 02:22:04.610777] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:24.050 [2024-11-21 02:22:04.610782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.050 [2024-11-21 02:22:04.680492] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.050 [2024-11-21 02:22:04.680582] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.337 02:22:07 -- event/event.sh@38 -- # waitforlisten 56837 /var/tmp/spdk-nbd.sock 00:05:27.337 02:22:07 -- common/autotest_common.sh@829 -- # '[' -z 56837 ']' 00:05:27.337 02:22:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.337 02:22:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.337 02:22:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.337 02:22:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.337 02:22:07 -- common/autotest_common.sh@10 -- # set +x 00:05:27.337 02:22:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.337 02:22:07 -- common/autotest_common.sh@862 -- # return 0 00:05:27.337 02:22:07 -- event/event.sh@39 -- # killprocess 56837 00:05:27.337 02:22:07 -- common/autotest_common.sh@936 -- # '[' -z 56837 ']' 00:05:27.337 02:22:07 -- common/autotest_common.sh@940 -- # kill -0 56837 00:05:27.337 02:22:07 -- common/autotest_common.sh@941 -- # uname 00:05:27.337 02:22:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.337 02:22:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56837 00:05:27.337 02:22:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:27.337 killing process with pid 56837 00:05:27.337 02:22:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:27.337 02:22:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56837' 00:05:27.337 02:22:07 -- common/autotest_common.sh@955 -- # kill 56837 00:05:27.338 02:22:07 -- common/autotest_common.sh@960 -- # wait 56837 00:05:27.338 spdk_app_start is called in Round 0. 00:05:27.338 Shutdown signal received, stop current app iteration 00:05:27.338 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.338 spdk_app_start is called in Round 1. 00:05:27.338 Shutdown signal received, stop current app iteration 00:05:27.338 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.338 spdk_app_start is called in Round 2. 00:05:27.338 Shutdown signal received, stop current app iteration 00:05:27.338 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:27.338 spdk_app_start is called in Round 3. 
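The nbd_dd_data_verify steps traced above reduce to a short dd/cmp cycle per device. A minimal sketch of that cycle for one device, reusing the temp-file path and /dev/nbd0 from this run (illustrative only, the real helper loops over every device in nbd_list):

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # push it through the NBD device
cmp -b -n 1M "$tmp" /dev/nbd0                              # read back and compare byte-for-byte
rm "$tmp"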
00:05:27.338 Shutdown signal received, stop current app iteration 00:05:27.338 02:22:07 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:27.338 02:22:07 -- event/event.sh@42 -- # return 0 00:05:27.338 00:05:27.338 real 0m19.639s 00:05:27.338 user 0m43.980s 00:05:27.338 sys 0m3.277s 00:05:27.338 02:22:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:27.338 02:22:07 -- common/autotest_common.sh@10 -- # set +x 00:05:27.338 ************************************ 00:05:27.338 END TEST app_repeat 00:05:27.338 ************************************ 00:05:27.338 02:22:07 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:27.338 02:22:07 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.338 02:22:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.338 02:22:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.338 02:22:07 -- common/autotest_common.sh@10 -- # set +x 00:05:27.338 ************************************ 00:05:27.338 START TEST cpu_locks 00:05:27.338 ************************************ 00:05:27.338 02:22:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:27.597 * Looking for test storage... 00:05:27.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.597 02:22:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.597 02:22:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.597 02:22:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.597 02:22:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.597 02:22:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.597 02:22:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.597 02:22:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.597 02:22:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.597 02:22:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.597 02:22:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.597 02:22:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.597 02:22:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.597 02:22:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.597 02:22:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.597 02:22:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.597 02:22:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.597 02:22:08 -- scripts/common.sh@344 -- # : 1 00:05:27.597 02:22:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.597 02:22:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.597 02:22:08 -- scripts/common.sh@364 -- # decimal 1 00:05:27.597 02:22:08 -- scripts/common.sh@352 -- # local d=1 00:05:27.597 02:22:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.597 02:22:08 -- scripts/common.sh@354 -- # echo 1 00:05:27.597 02:22:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.597 02:22:08 -- scripts/common.sh@365 -- # decimal 2 00:05:27.597 02:22:08 -- scripts/common.sh@352 -- # local d=2 00:05:27.597 02:22:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.597 02:22:08 -- scripts/common.sh@354 -- # echo 2 00:05:27.597 02:22:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.597 02:22:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.597 02:22:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.597 02:22:08 -- scripts/common.sh@367 -- # return 0 00:05:27.597 02:22:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.597 02:22:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.597 --rc genhtml_branch_coverage=1 00:05:27.597 --rc genhtml_function_coverage=1 00:05:27.597 --rc genhtml_legend=1 00:05:27.597 --rc geninfo_all_blocks=1 00:05:27.597 --rc geninfo_unexecuted_blocks=1 00:05:27.597 00:05:27.597 ' 00:05:27.597 02:22:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.597 --rc genhtml_branch_coverage=1 00:05:27.597 --rc genhtml_function_coverage=1 00:05:27.597 --rc genhtml_legend=1 00:05:27.597 --rc geninfo_all_blocks=1 00:05:27.598 --rc geninfo_unexecuted_blocks=1 00:05:27.598 00:05:27.598 ' 00:05:27.598 02:22:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.598 --rc genhtml_branch_coverage=1 00:05:27.598 --rc genhtml_function_coverage=1 00:05:27.598 --rc genhtml_legend=1 00:05:27.598 --rc geninfo_all_blocks=1 00:05:27.598 --rc geninfo_unexecuted_blocks=1 00:05:27.598 00:05:27.598 ' 00:05:27.598 02:22:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.598 --rc genhtml_branch_coverage=1 00:05:27.598 --rc genhtml_function_coverage=1 00:05:27.598 --rc genhtml_legend=1 00:05:27.598 --rc geninfo_all_blocks=1 00:05:27.598 --rc geninfo_unexecuted_blocks=1 00:05:27.598 00:05:27.598 ' 00:05:27.598 02:22:08 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:27.598 02:22:08 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:27.598 02:22:08 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:27.598 02:22:08 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:27.598 02:22:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.598 02:22:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.598 02:22:08 -- common/autotest_common.sh@10 -- # set +x 00:05:27.598 ************************************ 00:05:27.598 START TEST default_locks 00:05:27.598 ************************************ 00:05:27.598 02:22:08 -- common/autotest_common.sh@1114 -- # default_locks 00:05:27.598 02:22:08 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57477 00:05:27.598 02:22:08 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.598 02:22:08 -- event/cpu_locks.sh@47 -- # waitforlisten 
57477 00:05:27.598 02:22:08 -- common/autotest_common.sh@829 -- # '[' -z 57477 ']' 00:05:27.598 02:22:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.598 02:22:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.598 02:22:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.598 02:22:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.598 02:22:08 -- common/autotest_common.sh@10 -- # set +x 00:05:27.598 [2024-11-21 02:22:08.213872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:27.598 [2024-11-21 02:22:08.213960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57477 ] 00:05:27.856 [2024-11-21 02:22:08.350533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.856 [2024-11-21 02:22:08.434102] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.856 [2024-11-21 02:22:08.434258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.793 02:22:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.793 02:22:09 -- common/autotest_common.sh@862 -- # return 0 00:05:28.793 02:22:09 -- event/cpu_locks.sh@49 -- # locks_exist 57477 00:05:28.793 02:22:09 -- event/cpu_locks.sh@22 -- # lslocks -p 57477 00:05:28.793 02:22:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.793 02:22:09 -- event/cpu_locks.sh@50 -- # killprocess 57477 00:05:28.793 02:22:09 -- common/autotest_common.sh@936 -- # '[' -z 57477 ']' 00:05:28.793 02:22:09 -- common/autotest_common.sh@940 -- # kill -0 57477 00:05:28.793 02:22:09 -- common/autotest_common.sh@941 -- # uname 00:05:28.793 02:22:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.794 02:22:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57477 00:05:28.794 02:22:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.794 killing process with pid 57477 00:05:28.794 02:22:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.794 02:22:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57477' 00:05:28.794 02:22:09 -- common/autotest_common.sh@955 -- # kill 57477 00:05:28.794 02:22:09 -- common/autotest_common.sh@960 -- # wait 57477 00:05:29.360 02:22:09 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57477 00:05:29.360 02:22:09 -- common/autotest_common.sh@650 -- # local es=0 00:05:29.361 02:22:09 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57477 00:05:29.361 02:22:09 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:29.361 02:22:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.361 02:22:09 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:29.361 02:22:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.361 02:22:09 -- common/autotest_common.sh@653 -- # waitforlisten 57477 00:05:29.361 02:22:09 -- common/autotest_common.sh@829 -- # '[' -z 57477 ']' 00:05:29.361 02:22:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.361 02:22:09 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.361 02:22:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.361 02:22:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.361 02:22:09 -- common/autotest_common.sh@10 -- # set +x 00:05:29.361 ERROR: process (pid: 57477) is no longer running 00:05:29.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57477) - No such process 00:05:29.361 02:22:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.361 02:22:09 -- common/autotest_common.sh@862 -- # return 1 00:05:29.361 02:22:09 -- common/autotest_common.sh@653 -- # es=1 00:05:29.361 02:22:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.361 02:22:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:29.361 02:22:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.361 02:22:09 -- event/cpu_locks.sh@54 -- # no_locks 00:05:29.361 02:22:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.361 02:22:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.361 02:22:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.361 00:05:29.361 real 0m1.815s 00:05:29.361 user 0m1.828s 00:05:29.361 sys 0m0.557s 00:05:29.361 02:22:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.361 ************************************ 00:05:29.361 END TEST default_locks 00:05:29.361 ************************************ 00:05:29.361 02:22:09 -- common/autotest_common.sh@10 -- # set +x 00:05:29.619 02:22:10 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:29.619 02:22:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.620 02:22:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.620 02:22:10 -- common/autotest_common.sh@10 -- # set +x 00:05:29.620 ************************************ 00:05:29.620 START TEST default_locks_via_rpc 00:05:29.620 ************************************ 00:05:29.620 02:22:10 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:29.620 02:22:10 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57538 00:05:29.620 02:22:10 -- event/cpu_locks.sh@63 -- # waitforlisten 57538 00:05:29.620 02:22:10 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.620 02:22:10 -- common/autotest_common.sh@829 -- # '[' -z 57538 ']' 00:05:29.620 02:22:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.620 02:22:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.620 02:22:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.620 02:22:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.620 02:22:10 -- common/autotest_common.sh@10 -- # set +x 00:05:29.620 [2024-11-21 02:22:10.090947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
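The locks_exist and no_locks checks in the test above come down to asking lslocks which files the target pid holds. A sketch of the same check, assuming the pid and the lock-file naming seen in this run:

pid=57477                                   # spdk_tgt started with -m 0x1, claims core 0
lslocks -p "$pid" | grep -q spdk_cpu_lock   # true while /var/tmp/spdk_cpu_lock_000 is held
kill "$pid"; wait "$pid" 2>/dev/null        # stop the target and reap it
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null     # expect no output: locks are released on exit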
00:05:29.620 [2024-11-21 02:22:10.091082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57538 ] 00:05:29.620 [2024-11-21 02:22:10.227083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.879 [2024-11-21 02:22:10.313796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.879 [2024-11-21 02:22:10.313972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.448 02:22:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.448 02:22:11 -- common/autotest_common.sh@862 -- # return 0 00:05:30.448 02:22:11 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:30.448 02:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.448 02:22:11 -- common/autotest_common.sh@10 -- # set +x 00:05:30.448 02:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.448 02:22:11 -- event/cpu_locks.sh@67 -- # no_locks 00:05:30.448 02:22:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:30.448 02:22:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:30.448 02:22:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:30.448 02:22:11 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.448 02:22:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.448 02:22:11 -- common/autotest_common.sh@10 -- # set +x 00:05:30.708 02:22:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.708 02:22:11 -- event/cpu_locks.sh@71 -- # locks_exist 57538 00:05:30.708 02:22:11 -- event/cpu_locks.sh@22 -- # lslocks -p 57538 00:05:30.708 02:22:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.967 02:22:11 -- event/cpu_locks.sh@73 -- # killprocess 57538 00:05:30.967 02:22:11 -- common/autotest_common.sh@936 -- # '[' -z 57538 ']' 00:05:30.967 02:22:11 -- common/autotest_common.sh@940 -- # kill -0 57538 00:05:30.967 02:22:11 -- common/autotest_common.sh@941 -- # uname 00:05:30.967 02:22:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.967 02:22:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57538 00:05:30.967 02:22:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.967 killing process with pid 57538 00:05:30.967 02:22:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.967 02:22:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57538' 00:05:30.967 02:22:11 -- common/autotest_common.sh@955 -- # kill 57538 00:05:30.967 02:22:11 -- common/autotest_common.sh@960 -- # wait 57538 00:05:31.533 00:05:31.533 real 0m1.952s 00:05:31.533 user 0m2.033s 00:05:31.533 sys 0m0.590s 00:05:31.533 ************************************ 00:05:31.533 END TEST default_locks_via_rpc 00:05:31.533 ************************************ 00:05:31.533 02:22:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.533 02:22:11 -- common/autotest_common.sh@10 -- # set +x 00:05:31.533 02:22:12 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.533 02:22:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.533 02:22:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.533 02:22:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.533 
************************************ 00:05:31.533 START TEST non_locking_app_on_locked_coremask 00:05:31.533 ************************************ 00:05:31.533 02:22:12 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:31.533 02:22:12 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57607 00:05:31.533 02:22:12 -- event/cpu_locks.sh@81 -- # waitforlisten 57607 /var/tmp/spdk.sock 00:05:31.533 02:22:12 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.533 02:22:12 -- common/autotest_common.sh@829 -- # '[' -z 57607 ']' 00:05:31.533 02:22:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.533 02:22:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.533 02:22:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.533 02:22:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.533 02:22:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.533 [2024-11-21 02:22:12.078445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.533 [2024-11-21 02:22:12.078516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57607 ] 00:05:31.792 [2024-11-21 02:22:12.206546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.792 [2024-11-21 02:22:12.289643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.792 [2024-11-21 02:22:12.289826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.795 02:22:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.795 02:22:13 -- common/autotest_common.sh@862 -- # return 0 00:05:32.795 02:22:13 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57635 00:05:32.795 02:22:13 -- event/cpu_locks.sh@85 -- # waitforlisten 57635 /var/tmp/spdk2.sock 00:05:32.795 02:22:13 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.795 02:22:13 -- common/autotest_common.sh@829 -- # '[' -z 57635 ']' 00:05:32.795 02:22:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.795 02:22:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.795 02:22:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.795 02:22:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.795 02:22:13 -- common/autotest_common.sh@10 -- # set +x 00:05:32.795 [2024-11-21 02:22:13.162348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:32.795 [2024-11-21 02:22:13.162449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57635 ] 00:05:32.795 [2024-11-21 02:22:13.298854] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
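The point of this test is that the second target shares core 0 with the first but never tries to claim it. The two invocations from the trace above, side by side (the quoted error is what the second one would print without the flag):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                                                # claims core 0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
# Without --disable-cpumask-locks the second start aborts with:
#   "Cannot create lock on core 0, probably process <pid> has claimed it."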
00:05:32.795 [2024-11-21 02:22:13.298886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.054 [2024-11-21 02:22:13.471674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.054 [2024-11-21 02:22:13.471845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.621 02:22:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.621 02:22:14 -- common/autotest_common.sh@862 -- # return 0 00:05:33.621 02:22:14 -- event/cpu_locks.sh@87 -- # locks_exist 57607 00:05:33.621 02:22:14 -- event/cpu_locks.sh@22 -- # lslocks -p 57607 00:05:33.621 02:22:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.558 02:22:14 -- event/cpu_locks.sh@89 -- # killprocess 57607 00:05:34.558 02:22:14 -- common/autotest_common.sh@936 -- # '[' -z 57607 ']' 00:05:34.558 02:22:14 -- common/autotest_common.sh@940 -- # kill -0 57607 00:05:34.558 02:22:14 -- common/autotest_common.sh@941 -- # uname 00:05:34.558 02:22:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.558 02:22:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57607 00:05:34.558 02:22:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.558 02:22:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.558 killing process with pid 57607 00:05:34.558 02:22:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57607' 00:05:34.558 02:22:15 -- common/autotest_common.sh@955 -- # kill 57607 00:05:34.558 02:22:15 -- common/autotest_common.sh@960 -- # wait 57607 00:05:35.496 02:22:16 -- event/cpu_locks.sh@90 -- # killprocess 57635 00:05:35.496 02:22:16 -- common/autotest_common.sh@936 -- # '[' -z 57635 ']' 00:05:35.496 02:22:16 -- common/autotest_common.sh@940 -- # kill -0 57635 00:05:35.496 02:22:16 -- common/autotest_common.sh@941 -- # uname 00:05:35.496 02:22:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.496 02:22:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57635 00:05:35.496 02:22:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.496 02:22:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.496 killing process with pid 57635 00:05:35.496 02:22:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57635' 00:05:35.496 02:22:16 -- common/autotest_common.sh@955 -- # kill 57635 00:05:35.496 02:22:16 -- common/autotest_common.sh@960 -- # wait 57635 00:05:36.063 00:05:36.063 real 0m4.619s 00:05:36.063 user 0m5.031s 00:05:36.063 sys 0m1.261s 00:05:36.063 02:22:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.063 02:22:16 -- common/autotest_common.sh@10 -- # set +x 00:05:36.063 ************************************ 00:05:36.063 END TEST non_locking_app_on_locked_coremask 00:05:36.063 ************************************ 00:05:36.063 02:22:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:36.063 02:22:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.063 02:22:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.063 02:22:16 -- common/autotest_common.sh@10 -- # set +x 00:05:36.063 ************************************ 00:05:36.063 START TEST locking_app_on_unlocked_coremask 00:05:36.063 ************************************ 00:05:36.063 02:22:16 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:36.063 02:22:16 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57720 00:05:36.063 02:22:16 -- event/cpu_locks.sh@99 -- # waitforlisten 57720 /var/tmp/spdk.sock 00:05:36.063 02:22:16 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:36.063 02:22:16 -- common/autotest_common.sh@829 -- # '[' -z 57720 ']' 00:05:36.063 02:22:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.063 02:22:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.063 02:22:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.063 02:22:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.063 02:22:16 -- common/autotest_common.sh@10 -- # set +x 00:05:36.322 [2024-11-21 02:22:16.767003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:36.322 [2024-11-21 02:22:16.767119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57720 ] 00:05:36.322 [2024-11-21 02:22:16.905547] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:36.322 [2024-11-21 02:22:16.905597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.580 [2024-11-21 02:22:16.987787] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.580 [2024-11-21 02:22:16.987938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.147 02:22:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.147 02:22:17 -- common/autotest_common.sh@862 -- # return 0 00:05:37.147 02:22:17 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57748 00:05:37.147 02:22:17 -- event/cpu_locks.sh@103 -- # waitforlisten 57748 /var/tmp/spdk2.sock 00:05:37.147 02:22:17 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.147 02:22:17 -- common/autotest_common.sh@829 -- # '[' -z 57748 ']' 00:05:37.147 02:22:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.147 02:22:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.147 02:22:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.147 02:22:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.147 02:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:37.406 [2024-11-21 02:22:17.811879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:37.406 [2024-11-21 02:22:17.811966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57748 ] 00:05:37.406 [2024-11-21 02:22:17.950500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.665 [2024-11-21 02:22:18.119098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.665 [2024-11-21 02:22:18.119245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.233 02:22:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.233 02:22:18 -- common/autotest_common.sh@862 -- # return 0 00:05:38.233 02:22:18 -- event/cpu_locks.sh@105 -- # locks_exist 57748 00:05:38.233 02:22:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.233 02:22:18 -- event/cpu_locks.sh@22 -- # lslocks -p 57748 00:05:39.168 02:22:19 -- event/cpu_locks.sh@107 -- # killprocess 57720 00:05:39.168 02:22:19 -- common/autotest_common.sh@936 -- # '[' -z 57720 ']' 00:05:39.168 02:22:19 -- common/autotest_common.sh@940 -- # kill -0 57720 00:05:39.168 02:22:19 -- common/autotest_common.sh@941 -- # uname 00:05:39.168 02:22:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:39.168 02:22:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57720 00:05:39.168 02:22:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:39.168 02:22:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:39.168 killing process with pid 57720 00:05:39.169 02:22:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57720' 00:05:39.169 02:22:19 -- common/autotest_common.sh@955 -- # kill 57720 00:05:39.169 02:22:19 -- common/autotest_common.sh@960 -- # wait 57720 00:05:40.106 02:22:20 -- event/cpu_locks.sh@108 -- # killprocess 57748 00:05:40.106 02:22:20 -- common/autotest_common.sh@936 -- # '[' -z 57748 ']' 00:05:40.106 02:22:20 -- common/autotest_common.sh@940 -- # kill -0 57748 00:05:40.106 02:22:20 -- common/autotest_common.sh@941 -- # uname 00:05:40.107 02:22:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.107 02:22:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57748 00:05:40.107 02:22:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.107 killing process with pid 57748 00:05:40.107 02:22:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.107 02:22:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57748' 00:05:40.107 02:22:20 -- common/autotest_common.sh@955 -- # kill 57748 00:05:40.107 02:22:20 -- common/autotest_common.sh@960 -- # wait 57748 00:05:40.675 00:05:40.675 real 0m4.508s 00:05:40.675 user 0m4.832s 00:05:40.675 sys 0m1.271s 00:05:40.675 02:22:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.675 02:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.675 ************************************ 00:05:40.675 END TEST locking_app_on_unlocked_coremask 00:05:40.675 ************************************ 00:05:40.675 02:22:21 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:40.675 02:22:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.675 02:22:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.675 02:22:21 -- common/autotest_common.sh@10 -- # set +x 
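Every test in this file wraps its target in the same start/stop helpers. A simplified sketch of that wrapper (the real waitforlisten and killprocess in test/common/autotest_common.sh retry, check the process name, and do extra bookkeeping):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid=$!
waitforlisten "$pid" /var/tmp/spdk.sock   # poll until the RPC socket accepts connections
# ... per-test assertions run against $pid here ...
kill "$pid"
wait "$pid"                               # same shape as killprocess: SIGTERM, then reap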
00:05:40.675 ************************************ 00:05:40.675 START TEST locking_app_on_locked_coremask 00:05:40.675 ************************************ 00:05:40.675 02:22:21 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:40.675 02:22:21 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57832 00:05:40.675 02:22:21 -- event/cpu_locks.sh@116 -- # waitforlisten 57832 /var/tmp/spdk.sock 00:05:40.675 02:22:21 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.675 02:22:21 -- common/autotest_common.sh@829 -- # '[' -z 57832 ']' 00:05:40.675 02:22:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.675 02:22:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.675 02:22:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.675 02:22:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.675 02:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.934 [2024-11-21 02:22:21.334625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.934 [2024-11-21 02:22:21.334810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57832 ] 00:05:40.934 [2024-11-21 02:22:21.467831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.934 [2024-11-21 02:22:21.545439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.934 [2024-11-21 02:22:21.545603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.871 02:22:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.871 02:22:22 -- common/autotest_common.sh@862 -- # return 0 00:05:41.871 02:22:22 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57860 00:05:41.871 02:22:22 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57860 /var/tmp/spdk2.sock 00:05:41.871 02:22:22 -- common/autotest_common.sh@650 -- # local es=0 00:05:41.871 02:22:22 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57860 /var/tmp/spdk2.sock 00:05:41.871 02:22:22 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.871 02:22:22 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:41.871 02:22:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.871 02:22:22 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:41.871 02:22:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.871 02:22:22 -- common/autotest_common.sh@653 -- # waitforlisten 57860 /var/tmp/spdk2.sock 00:05:41.871 02:22:22 -- common/autotest_common.sh@829 -- # '[' -z 57860 ']' 00:05:41.871 02:22:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.871 02:22:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.871 02:22:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
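The NOT wrapper used here inverts the helper's exit status, so this test only passes if the second target fails to come up on the already-claimed core. Roughly (simplified, the real NOT in autotest_common.sh also validates the command and distinguishes signal exits):

NOT() { ! "$@"; }
NOT waitforlisten 57860 /var/tmp/spdk2.sock   # passes precisely because waitforlisten fails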
00:05:41.871 02:22:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.871 02:22:22 -- common/autotest_common.sh@10 -- # set +x 00:05:41.871 [2024-11-21 02:22:22.387788] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.871 [2024-11-21 02:22:22.387911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57860 ] 00:05:42.131 [2024-11-21 02:22:22.527857] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57832 has claimed it. 00:05:42.131 [2024-11-21 02:22:22.527911] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:42.700 ERROR: process (pid: 57860) is no longer running 00:05:42.700 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57860) - No such process 00:05:42.700 02:22:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.700 02:22:23 -- common/autotest_common.sh@862 -- # return 1 00:05:42.700 02:22:23 -- common/autotest_common.sh@653 -- # es=1 00:05:42.700 02:22:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:42.700 02:22:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:42.700 02:22:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:42.700 02:22:23 -- event/cpu_locks.sh@122 -- # locks_exist 57832 00:05:42.700 02:22:23 -- event/cpu_locks.sh@22 -- # lslocks -p 57832 00:05:42.700 02:22:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.959 02:22:23 -- event/cpu_locks.sh@124 -- # killprocess 57832 00:05:42.959 02:22:23 -- common/autotest_common.sh@936 -- # '[' -z 57832 ']' 00:05:42.959 02:22:23 -- common/autotest_common.sh@940 -- # kill -0 57832 00:05:42.959 02:22:23 -- common/autotest_common.sh@941 -- # uname 00:05:42.959 02:22:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.959 02:22:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57832 00:05:42.959 02:22:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.959 killing process with pid 57832 00:05:42.959 02:22:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.959 02:22:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57832' 00:05:42.959 02:22:23 -- common/autotest_common.sh@955 -- # kill 57832 00:05:42.959 02:22:23 -- common/autotest_common.sh@960 -- # wait 57832 00:05:43.526 00:05:43.526 real 0m2.715s 00:05:43.526 user 0m3.098s 00:05:43.526 sys 0m0.644s 00:05:43.526 02:22:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.526 02:22:23 -- common/autotest_common.sh@10 -- # set +x 00:05:43.526 ************************************ 00:05:43.526 END TEST locking_app_on_locked_coremask 00:05:43.526 ************************************ 00:05:43.526 02:22:24 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:43.526 02:22:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.526 02:22:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.526 02:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:43.526 ************************************ 00:05:43.526 START TEST locking_overlapped_coremask 00:05:43.526 ************************************ 00:05:43.526 02:22:24 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:43.526 02:22:24 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57912 00:05:43.526 02:22:24 -- event/cpu_locks.sh@133 -- # waitforlisten 57912 /var/tmp/spdk.sock 00:05:43.526 02:22:24 -- common/autotest_common.sh@829 -- # '[' -z 57912 ']' 00:05:43.526 02:22:24 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:43.526 02:22:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.526 02:22:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.526 02:22:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.526 02:22:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.526 02:22:24 -- common/autotest_common.sh@10 -- # set +x 00:05:43.526 [2024-11-21 02:22:24.107480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.526 [2024-11-21 02:22:24.107594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57912 ] 00:05:43.785 [2024-11-21 02:22:24.238660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.785 [2024-11-21 02:22:24.321664] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.785 [2024-11-21 02:22:24.322137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.785 [2024-11-21 02:22:24.322274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.785 [2024-11-21 02:22:24.322279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.720 02:22:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.720 02:22:25 -- common/autotest_common.sh@862 -- # return 0 00:05:44.720 02:22:25 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57942 00:05:44.720 02:22:25 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:44.720 02:22:25 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57942 /var/tmp/spdk2.sock 00:05:44.720 02:22:25 -- common/autotest_common.sh@650 -- # local es=0 00:05:44.720 02:22:25 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57942 /var/tmp/spdk2.sock 00:05:44.720 02:22:25 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:44.720 02:22:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.720 02:22:25 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:44.720 02:22:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.720 02:22:25 -- common/autotest_common.sh@653 -- # waitforlisten 57942 /var/tmp/spdk2.sock 00:05:44.720 02:22:25 -- common/autotest_common.sh@829 -- # '[' -z 57942 ']' 00:05:44.720 02:22:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.720 02:22:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.720 02:22:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
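The two core masks in this test overlap, which is exactly what the expected failure exercises. A quick check of the overlap, using the masks from the invocations above:

# 0x7  -> cores 0-2 (first target, holds /var/tmp/spdk_cpu_lock_000..002)
# 0x1c -> cores 2-4 (second target)
printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 is contested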
00:05:44.720 02:22:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.720 02:22:25 -- common/autotest_common.sh@10 -- # set +x 00:05:44.720 [2024-11-21 02:22:25.101049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.720 [2024-11-21 02:22:25.101685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57942 ] 00:05:44.720 [2024-11-21 02:22:25.248673] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57912 has claimed it. 00:05:44.720 [2024-11-21 02:22:25.248735] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:45.287 ERROR: process (pid: 57942) is no longer running 00:05:45.287 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57942) - No such process 00:05:45.287 02:22:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.287 02:22:25 -- common/autotest_common.sh@862 -- # return 1 00:05:45.287 02:22:25 -- common/autotest_common.sh@653 -- # es=1 00:05:45.287 02:22:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.287 02:22:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.287 02:22:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.287 02:22:25 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:45.287 02:22:25 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:45.287 02:22:25 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:45.287 02:22:25 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:45.287 02:22:25 -- event/cpu_locks.sh@141 -- # killprocess 57912 00:05:45.287 02:22:25 -- common/autotest_common.sh@936 -- # '[' -z 57912 ']' 00:05:45.287 02:22:25 -- common/autotest_common.sh@940 -- # kill -0 57912 00:05:45.287 02:22:25 -- common/autotest_common.sh@941 -- # uname 00:05:45.287 02:22:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.287 02:22:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57912 00:05:45.287 02:22:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.287 killing process with pid 57912 00:05:45.287 02:22:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.287 02:22:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57912' 00:05:45.287 02:22:25 -- common/autotest_common.sh@955 -- # kill 57912 00:05:45.287 02:22:25 -- common/autotest_common.sh@960 -- # wait 57912 00:05:45.855 00:05:45.855 real 0m2.297s 00:05:45.855 user 0m6.237s 00:05:45.855 sys 0m0.503s 00:05:45.855 02:22:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.855 02:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.855 ************************************ 00:05:45.855 END TEST locking_overlapped_coremask 00:05:45.855 ************************************ 00:05:45.855 02:22:26 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:45.855 02:22:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.855 02:22:26 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.855 02:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.855 ************************************ 00:05:45.855 START TEST locking_overlapped_coremask_via_rpc 00:05:45.855 ************************************ 00:05:45.855 02:22:26 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:45.855 02:22:26 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=57988 00:05:45.855 02:22:26 -- event/cpu_locks.sh@149 -- # waitforlisten 57988 /var/tmp/spdk.sock 00:05:45.855 02:22:26 -- common/autotest_common.sh@829 -- # '[' -z 57988 ']' 00:05:45.855 02:22:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.855 02:22:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.855 02:22:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.855 02:22:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.855 02:22:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.855 02:22:26 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:45.855 [2024-11-21 02:22:26.445397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.855 [2024-11-21 02:22:26.445476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57988 ] 00:05:46.128 [2024-11-21 02:22:26.572380] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.128 [2024-11-21 02:22:26.572414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.128 [2024-11-21 02:22:26.677780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.128 [2024-11-21 02:22:26.678062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.128 [2024-11-21 02:22:26.678176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.128 [2024-11-21 02:22:26.678181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.100 02:22:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.100 02:22:27 -- common/autotest_common.sh@862 -- # return 0 00:05:47.100 02:22:27 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58018 00:05:47.100 02:22:27 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:47.100 02:22:27 -- event/cpu_locks.sh@153 -- # waitforlisten 58018 /var/tmp/spdk2.sock 00:05:47.100 02:22:27 -- common/autotest_common.sh@829 -- # '[' -z 58018 ']' 00:05:47.100 02:22:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.100 02:22:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.100 02:22:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
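Unlike the previous test, both targets in locking_overlapped_coremask_via_rpc are launched with --disable-cpumask-locks, so app.c prints "CPU core locks deactivated." and the overlapping masks (0x7 and 0x1c share core 2) do not collide at startup; the collision is only provoked later through the framework_enable_cpumask_locks RPC. A rough sketch of the launch sequence the test drives, using only the flags visible in this log (binary path shortened):

    # first target: cores 0-2, default RPC socket /var/tmp/spdk.sock, no core locks taken
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # second target: cores 2-4, separate RPC socket, also without core locks
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &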
00:05:47.100 02:22:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.100 02:22:27 -- common/autotest_common.sh@10 -- # set +x 00:05:47.100 [2024-11-21 02:22:27.468131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:47.100 [2024-11-21 02:22:27.468231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58018 ] 00:05:47.100 [2024-11-21 02:22:27.605435] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.100 [2024-11-21 02:22:27.605491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.359 [2024-11-21 02:22:27.811164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.359 [2024-11-21 02:22:27.811485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.359 [2024-11-21 02:22:27.814853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:47.359 [2024-11-21 02:22:27.814856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.925 02:22:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.925 02:22:28 -- common/autotest_common.sh@862 -- # return 0 00:05:47.925 02:22:28 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:47.925 02:22:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.925 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:47.925 02:22:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.925 02:22:28 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.925 02:22:28 -- common/autotest_common.sh@650 -- # local es=0 00:05:47.925 02:22:28 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.925 02:22:28 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:47.926 02:22:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.926 02:22:28 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:47.926 02:22:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.926 02:22:28 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:47.926 02:22:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.926 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:47.926 [2024-11-21 02:22:28.524889] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57988 has claimed it. 
00:05:47.926 2024/11/21 02:22:28 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:47.926 request: 00:05:47.926 { 00:05:47.926 "method": "framework_enable_cpumask_locks", 00:05:47.926 "params": {} 00:05:47.926 } 00:05:47.926 Got JSON-RPC error response 00:05:47.926 GoRPCClient: error on JSON-RPC call 00:05:47.926 02:22:28 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:47.926 02:22:28 -- common/autotest_common.sh@653 -- # es=1 00:05:47.926 02:22:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.926 02:22:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.926 02:22:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.926 02:22:28 -- event/cpu_locks.sh@158 -- # waitforlisten 57988 /var/tmp/spdk.sock 00:05:47.926 02:22:28 -- common/autotest_common.sh@829 -- # '[' -z 57988 ']' 00:05:47.926 02:22:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.926 02:22:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.926 02:22:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.926 02:22:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.926 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.184 02:22:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.184 02:22:28 -- common/autotest_common.sh@862 -- # return 0 00:05:48.184 02:22:28 -- event/cpu_locks.sh@159 -- # waitforlisten 58018 /var/tmp/spdk2.sock 00:05:48.184 02:22:28 -- common/autotest_common.sh@829 -- # '[' -z 58018 ']' 00:05:48.184 02:22:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.184 02:22:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.184 02:22:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
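The failure above surfaces twice: as the claim_cpu_cores error in app.c and as a JSON-RPC error (Code=-32603, "Failed to claim CPU core: 2") returned to the Go RPC client. Assuming SPDK's scripts/rpc.py from this checkout, the same call could be reproduced by hand against the second target's socket; the request and response in the comments are copied from the log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # request : {"method": "framework_enable_cpumask_locks", "params": {}}
    # response: Code=-32603 Msg=Failed to claim CPU core: 2   (while pid 57988 still holds the locks)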
00:05:48.184 02:22:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.184 02:22:28 -- common/autotest_common.sh@10 -- # set +x 00:05:48.442 02:22:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.442 02:22:29 -- common/autotest_common.sh@862 -- # return 0 00:05:48.442 02:22:29 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:48.442 02:22:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.442 02:22:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.442 02:22:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.442 00:05:48.442 real 0m2.680s 00:05:48.442 user 0m1.369s 00:05:48.442 sys 0m0.248s 00:05:48.442 02:22:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.442 02:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:48.442 ************************************ 00:05:48.442 END TEST locking_overlapped_coremask_via_rpc 00:05:48.442 ************************************ 00:05:48.718 02:22:29 -- event/cpu_locks.sh@174 -- # cleanup 00:05:48.719 02:22:29 -- event/cpu_locks.sh@15 -- # [[ -z 57988 ]] 00:05:48.719 02:22:29 -- event/cpu_locks.sh@15 -- # killprocess 57988 00:05:48.719 02:22:29 -- common/autotest_common.sh@936 -- # '[' -z 57988 ']' 00:05:48.719 02:22:29 -- common/autotest_common.sh@940 -- # kill -0 57988 00:05:48.719 02:22:29 -- common/autotest_common.sh@941 -- # uname 00:05:48.719 02:22:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.719 02:22:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57988 00:05:48.719 02:22:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.719 02:22:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.719 02:22:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57988' 00:05:48.719 killing process with pid 57988 00:05:48.719 02:22:29 -- common/autotest_common.sh@955 -- # kill 57988 00:05:48.719 02:22:29 -- common/autotest_common.sh@960 -- # wait 57988 00:05:49.286 02:22:29 -- event/cpu_locks.sh@16 -- # [[ -z 58018 ]] 00:05:49.286 02:22:29 -- event/cpu_locks.sh@16 -- # killprocess 58018 00:05:49.286 02:22:29 -- common/autotest_common.sh@936 -- # '[' -z 58018 ']' 00:05:49.286 02:22:29 -- common/autotest_common.sh@940 -- # kill -0 58018 00:05:49.286 02:22:29 -- common/autotest_common.sh@941 -- # uname 00:05:49.287 02:22:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.287 02:22:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58018 00:05:49.287 02:22:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:49.287 02:22:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:49.287 killing process with pid 58018 00:05:49.287 02:22:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58018' 00:05:49.287 02:22:29 -- common/autotest_common.sh@955 -- # kill 58018 00:05:49.287 02:22:29 -- common/autotest_common.sh@960 -- # wait 58018 00:05:49.853 02:22:30 -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.853 02:22:30 -- event/cpu_locks.sh@1 -- # cleanup 00:05:49.853 02:22:30 -- event/cpu_locks.sh@15 -- # [[ -z 57988 ]] 00:05:49.853 02:22:30 -- event/cpu_locks.sh@15 -- # killprocess 57988 00:05:49.853 02:22:30 -- 
common/autotest_common.sh@936 -- # '[' -z 57988 ']' 00:05:49.853 02:22:30 -- common/autotest_common.sh@940 -- # kill -0 57988 00:05:49.853 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (57988) - No such process 00:05:49.853 Process with pid 57988 is not found 00:05:49.853 02:22:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 57988 is not found' 00:05:49.853 02:22:30 -- event/cpu_locks.sh@16 -- # [[ -z 58018 ]] 00:05:49.853 02:22:30 -- event/cpu_locks.sh@16 -- # killprocess 58018 00:05:49.853 02:22:30 -- common/autotest_common.sh@936 -- # '[' -z 58018 ']' 00:05:49.853 02:22:30 -- common/autotest_common.sh@940 -- # kill -0 58018 00:05:49.853 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58018) - No such process 00:05:49.853 Process with pid 58018 is not found 00:05:49.853 02:22:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58018 is not found' 00:05:49.853 02:22:30 -- event/cpu_locks.sh@18 -- # rm -f 00:05:49.853 00:05:49.853 real 0m22.386s 00:05:49.853 user 0m38.462s 00:05:49.853 sys 0m6.106s 00:05:49.853 02:22:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.853 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.853 ************************************ 00:05:49.853 END TEST cpu_locks 00:05:49.853 ************************************ 00:05:49.853 00:05:49.853 real 0m51.392s 00:05:49.853 user 1m38.003s 00:05:49.853 sys 0m10.273s 00:05:49.853 02:22:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.853 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.853 ************************************ 00:05:49.853 END TEST event 00:05:49.853 ************************************ 00:05:49.853 02:22:30 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:49.853 02:22:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.853 02:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.853 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:49.853 ************************************ 00:05:49.853 START TEST thread 00:05:49.853 ************************************ 00:05:49.853 02:22:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:50.112 * Looking for test storage... 
00:05:50.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:50.112 02:22:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:50.112 02:22:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:50.112 02:22:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:50.112 02:22:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:50.112 02:22:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:50.112 02:22:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:50.112 02:22:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:50.112 02:22:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:50.112 02:22:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:50.112 02:22:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.112 02:22:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:50.112 02:22:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:50.112 02:22:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:50.112 02:22:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:50.112 02:22:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:50.112 02:22:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:50.112 02:22:30 -- scripts/common.sh@344 -- # : 1 00:05:50.112 02:22:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:50.112 02:22:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.112 02:22:30 -- scripts/common.sh@364 -- # decimal 1 00:05:50.112 02:22:30 -- scripts/common.sh@352 -- # local d=1 00:05:50.112 02:22:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.112 02:22:30 -- scripts/common.sh@354 -- # echo 1 00:05:50.112 02:22:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:50.112 02:22:30 -- scripts/common.sh@365 -- # decimal 2 00:05:50.112 02:22:30 -- scripts/common.sh@352 -- # local d=2 00:05:50.112 02:22:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.112 02:22:30 -- scripts/common.sh@354 -- # echo 2 00:05:50.112 02:22:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:50.112 02:22:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:50.112 02:22:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:50.112 02:22:30 -- scripts/common.sh@367 -- # return 0 00:05:50.112 02:22:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.112 02:22:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:50.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.112 --rc genhtml_branch_coverage=1 00:05:50.112 --rc genhtml_function_coverage=1 00:05:50.112 --rc genhtml_legend=1 00:05:50.112 --rc geninfo_all_blocks=1 00:05:50.112 --rc geninfo_unexecuted_blocks=1 00:05:50.112 00:05:50.112 ' 00:05:50.112 02:22:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:50.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.112 --rc genhtml_branch_coverage=1 00:05:50.112 --rc genhtml_function_coverage=1 00:05:50.112 --rc genhtml_legend=1 00:05:50.112 --rc geninfo_all_blocks=1 00:05:50.112 --rc geninfo_unexecuted_blocks=1 00:05:50.112 00:05:50.112 ' 00:05:50.112 02:22:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:50.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.112 --rc genhtml_branch_coverage=1 00:05:50.112 --rc genhtml_function_coverage=1 00:05:50.112 --rc genhtml_legend=1 00:05:50.112 --rc geninfo_all_blocks=1 00:05:50.112 --rc geninfo_unexecuted_blocks=1 00:05:50.112 00:05:50.112 ' 00:05:50.112 02:22:30 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:50.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.112 --rc genhtml_branch_coverage=1 00:05:50.112 --rc genhtml_function_coverage=1 00:05:50.112 --rc genhtml_legend=1 00:05:50.112 --rc geninfo_all_blocks=1 00:05:50.112 --rc geninfo_unexecuted_blocks=1 00:05:50.112 00:05:50.112 ' 00:05:50.112 02:22:30 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.112 02:22:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:50.113 02:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.113 02:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:50.113 ************************************ 00:05:50.113 START TEST thread_poller_perf 00:05:50.113 ************************************ 00:05:50.113 02:22:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.113 [2024-11-21 02:22:30.654238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.113 [2024-11-21 02:22:30.654316] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58187 ] 00:05:50.371 [2024-11-21 02:22:30.789700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.371 [2024-11-21 02:22:30.922785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.371 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:51.747 [2024-11-21T02:22:32.395Z] ====================================== 00:05:51.748 [2024-11-21T02:22:32.395Z] busy:2210636428 (cyc) 00:05:51.748 [2024-11-21T02:22:32.395Z] total_run_count: 372000 00:05:51.748 [2024-11-21T02:22:32.395Z] tsc_hz: 2200000000 (cyc) 00:05:51.748 [2024-11-21T02:22:32.395Z] ====================================== 00:05:51.748 [2024-11-21T02:22:32.395Z] poller_cost: 5942 (cyc), 2700 (nsec) 00:05:51.748 00:05:51.748 real 0m1.430s 00:05:51.748 user 0m1.260s 00:05:51.748 sys 0m0.062s 00:05:51.748 02:22:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.748 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:51.748 ************************************ 00:05:51.748 END TEST thread_poller_perf 00:05:51.748 ************************************ 00:05:51.748 02:22:32 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.748 02:22:32 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:51.748 02:22:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.748 02:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:51.748 ************************************ 00:05:51.748 START TEST thread_poller_perf 00:05:51.748 ************************************ 00:05:51.748 02:22:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.748 [2024-11-21 02:22:32.142502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:51.748 [2024-11-21 02:22:32.142594] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58218 ] 00:05:51.748 [2024-11-21 02:22:32.265351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.748 [2024-11-21 02:22:32.361539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.748 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:53.126 [2024-11-21T02:22:33.773Z] ====================================== 00:05:53.126 [2024-11-21T02:22:33.773Z] busy:2202975456 (cyc) 00:05:53.126 [2024-11-21T02:22:33.773Z] total_run_count: 5292000 00:05:53.126 [2024-11-21T02:22:33.773Z] tsc_hz: 2200000000 (cyc) 00:05:53.126 [2024-11-21T02:22:33.773Z] ====================================== 00:05:53.126 [2024-11-21T02:22:33.773Z] poller_cost: 416 (cyc), 189 (nsec) 00:05:53.126 00:05:53.126 real 0m1.374s 00:05:53.126 user 0m1.204s 00:05:53.126 sys 0m0.063s 00:05:53.126 02:22:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.126 02:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.126 ************************************ 00:05:53.126 END TEST thread_poller_perf 00:05:53.126 ************************************ 00:05:53.126 02:22:33 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:53.126 00:05:53.126 real 0m3.109s 00:05:53.126 user 0m2.622s 00:05:53.126 sys 0m0.263s 00:05:53.126 02:22:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.126 02:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.126 ************************************ 00:05:53.126 END TEST thread 00:05:53.126 ************************************ 00:05:53.126 02:22:33 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:53.126 02:22:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.126 02:22:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.126 02:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.126 ************************************ 00:05:53.126 START TEST accel 00:05:53.126 ************************************ 00:05:53.127 02:22:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:53.127 * Looking for test storage... 
00:05:53.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:53.127 02:22:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:53.127 02:22:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:53.127 02:22:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:53.127 02:22:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:53.127 02:22:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:53.127 02:22:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:53.127 02:22:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:53.127 02:22:33 -- scripts/common.sh@335 -- # IFS=.-: 00:05:53.127 02:22:33 -- scripts/common.sh@335 -- # read -ra ver1 00:05:53.127 02:22:33 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.127 02:22:33 -- scripts/common.sh@336 -- # read -ra ver2 00:05:53.127 02:22:33 -- scripts/common.sh@337 -- # local 'op=<' 00:05:53.127 02:22:33 -- scripts/common.sh@339 -- # ver1_l=2 00:05:53.127 02:22:33 -- scripts/common.sh@340 -- # ver2_l=1 00:05:53.127 02:22:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:53.127 02:22:33 -- scripts/common.sh@343 -- # case "$op" in 00:05:53.127 02:22:33 -- scripts/common.sh@344 -- # : 1 00:05:53.127 02:22:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:53.127 02:22:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.127 02:22:33 -- scripts/common.sh@364 -- # decimal 1 00:05:53.127 02:22:33 -- scripts/common.sh@352 -- # local d=1 00:05:53.127 02:22:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.127 02:22:33 -- scripts/common.sh@354 -- # echo 1 00:05:53.127 02:22:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:53.127 02:22:33 -- scripts/common.sh@365 -- # decimal 2 00:05:53.127 02:22:33 -- scripts/common.sh@352 -- # local d=2 00:05:53.127 02:22:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.127 02:22:33 -- scripts/common.sh@354 -- # echo 2 00:05:53.127 02:22:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:53.127 02:22:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:53.127 02:22:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:53.127 02:22:33 -- scripts/common.sh@367 -- # return 0 00:05:53.127 02:22:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.127 02:22:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:53.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.127 --rc genhtml_branch_coverage=1 00:05:53.127 --rc genhtml_function_coverage=1 00:05:53.127 --rc genhtml_legend=1 00:05:53.127 --rc geninfo_all_blocks=1 00:05:53.127 --rc geninfo_unexecuted_blocks=1 00:05:53.127 00:05:53.127 ' 00:05:53.127 02:22:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:53.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.127 --rc genhtml_branch_coverage=1 00:05:53.127 --rc genhtml_function_coverage=1 00:05:53.127 --rc genhtml_legend=1 00:05:53.127 --rc geninfo_all_blocks=1 00:05:53.127 --rc geninfo_unexecuted_blocks=1 00:05:53.127 00:05:53.127 ' 00:05:53.127 02:22:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:53.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.127 --rc genhtml_branch_coverage=1 00:05:53.127 --rc genhtml_function_coverage=1 00:05:53.127 --rc genhtml_legend=1 00:05:53.127 --rc geninfo_all_blocks=1 00:05:53.127 --rc geninfo_unexecuted_blocks=1 00:05:53.127 00:05:53.127 ' 00:05:53.127 02:22:33 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:53.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.127 --rc genhtml_branch_coverage=1 00:05:53.127 --rc genhtml_function_coverage=1 00:05:53.127 --rc genhtml_legend=1 00:05:53.127 --rc geninfo_all_blocks=1 00:05:53.127 --rc geninfo_unexecuted_blocks=1 00:05:53.127 00:05:53.127 ' 00:05:53.127 02:22:33 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:53.127 02:22:33 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:53.127 02:22:33 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.127 02:22:33 -- accel/accel.sh@59 -- # spdk_tgt_pid=58304 00:05:53.127 02:22:33 -- accel/accel.sh@60 -- # waitforlisten 58304 00:05:53.127 02:22:33 -- common/autotest_common.sh@829 -- # '[' -z 58304 ']' 00:05:53.127 02:22:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.127 02:22:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.127 02:22:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.127 02:22:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.127 02:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.127 02:22:33 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:53.127 02:22:33 -- accel/accel.sh@58 -- # build_accel_config 00:05:53.127 02:22:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.127 02:22:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.127 02:22:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.127 02:22:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.127 02:22:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.127 02:22:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.127 02:22:33 -- accel/accel.sh@42 -- # jq -r . 00:05:53.386 [2024-11-21 02:22:33.804655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.386 [2024-11-21 02:22:33.804785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58304 ] 00:05:53.386 [2024-11-21 02:22:33.935942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.644 [2024-11-21 02:22:34.060072] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.644 [2024-11-21 02:22:34.060304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.211 02:22:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.212 02:22:34 -- common/autotest_common.sh@862 -- # return 0 00:05:54.212 02:22:34 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:54.212 02:22:34 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:54.212 02:22:34 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:54.212 02:22:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.212 02:22:34 -- common/autotest_common.sh@10 -- # set +x 00:05:54.470 02:22:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.470 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.470 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.470 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.471 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.471 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.471 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.471 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.471 
02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.471 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.471 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.471 02:22:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # IFS== 00:05:54.471 02:22:34 -- accel/accel.sh@64 -- # read -r opc module 00:05:54.471 02:22:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:54.471 02:22:34 -- accel/accel.sh@67 -- # killprocess 58304 00:05:54.471 02:22:34 -- common/autotest_common.sh@936 -- # '[' -z 58304 ']' 00:05:54.471 02:22:34 -- common/autotest_common.sh@940 -- # kill -0 58304 00:05:54.471 02:22:34 -- common/autotest_common.sh@941 -- # uname 00:05:54.471 02:22:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.471 02:22:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58304 00:05:54.471 02:22:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.471 02:22:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.471 killing process with pid 58304 00:05:54.471 02:22:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58304' 00:05:54.471 02:22:34 -- common/autotest_common.sh@955 -- # kill 58304 00:05:54.471 02:22:34 -- common/autotest_common.sh@960 -- # wait 58304 00:05:55.038 02:22:35 -- accel/accel.sh@68 -- # trap - ERR 00:05:55.039 02:22:35 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:55.039 02:22:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:55.039 02:22:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.039 02:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.039 02:22:35 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:55.039 02:22:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:55.039 02:22:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.039 02:22:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.039 02:22:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.039 02:22:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.039 02:22:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.039 02:22:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.039 02:22:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.039 02:22:35 -- accel/accel.sh@42 -- # jq -r . 
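The expected_opcs loop above consumes the output of the accel_get_opc_assignments RPC after jq's 'to_entries | map("\(.key)=\(.value)") | .[]' filter has flattened it into one "opcode=module" pair per line; every opcode in this run is assigned to the software module. A quick illustration of that jq filter on a made-up input (the real opcode names come from the RPC, not from this example):

    echo '{"copy":"software","fill":"software","crc32c":"software"}' \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software
    # crc32c=software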
00:05:55.039 02:22:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.039 02:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.039 02:22:35 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:55.039 02:22:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:55.039 02:22:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.039 02:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:55.039 ************************************ 00:05:55.039 START TEST accel_missing_filename 00:05:55.039 ************************************ 00:05:55.039 02:22:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:55.039 02:22:35 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.039 02:22:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:55.039 02:22:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:55.039 02:22:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.039 02:22:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:55.039 02:22:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.039 02:22:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:55.039 02:22:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:55.039 02:22:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.039 02:22:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.039 02:22:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.039 02:22:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.039 02:22:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.039 02:22:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.039 02:22:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.039 02:22:35 -- accel/accel.sh@42 -- # jq -r . 00:05:55.039 [2024-11-21 02:22:35.607685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.039 [2024-11-21 02:22:35.607808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58369 ] 00:05:55.298 [2024-11-21 02:22:35.746447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.298 [2024-11-21 02:22:35.852727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.298 [2024-11-21 02:22:35.929827] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.556 [2024-11-21 02:22:36.034600] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:55.556 A filename is required. 
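The compress workload refuses to start without an input file, which is exactly what this negative test (accel_missing_filename) expects. The positive path is covered by the next test, accel_compress_verify, which supplies the repo's test/accel/bib file via -l; a hedged example of a valid invocation built only from the binary and options seen in this log:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib
    # adding -y would request result verification, which compress rejects
    # ("Compression does not support the verify option"), as the verify test below shows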
00:05:55.556 02:22:36 -- common/autotest_common.sh@653 -- # es=234 00:05:55.556 02:22:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.556 02:22:36 -- common/autotest_common.sh@662 -- # es=106 00:05:55.556 02:22:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:55.556 02:22:36 -- common/autotest_common.sh@670 -- # es=1 00:05:55.556 02:22:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.556 00:05:55.556 real 0m0.574s 00:05:55.556 user 0m0.374s 00:05:55.556 sys 0m0.143s 00:05:55.556 02:22:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.556 02:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:55.556 ************************************ 00:05:55.556 END TEST accel_missing_filename 00:05:55.556 ************************************ 00:05:55.557 02:22:36 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:55.557 02:22:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:55.557 02:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.557 02:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:55.815 ************************************ 00:05:55.815 START TEST accel_compress_verify 00:05:55.815 ************************************ 00:05:55.815 02:22:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:55.815 02:22:36 -- common/autotest_common.sh@650 -- # local es=0 00:05:55.815 02:22:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:55.815 02:22:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:55.815 02:22:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.816 02:22:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:55.816 02:22:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.816 02:22:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:55.816 02:22:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:55.816 02:22:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.816 02:22:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.816 02:22:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.816 02:22:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.816 02:22:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.816 02:22:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.816 02:22:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.816 02:22:36 -- accel/accel.sh@42 -- # jq -r . 00:05:55.816 [2024-11-21 02:22:36.231026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:55.816 [2024-11-21 02:22:36.231125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58399 ] 00:05:55.816 [2024-11-21 02:22:36.366843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.816 [2024-11-21 02:22:36.444096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.074 [2024-11-21 02:22:36.513717] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.074 [2024-11-21 02:22:36.615083] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:56.333 00:05:56.333 Compression does not support the verify option, aborting. 00:05:56.333 02:22:36 -- common/autotest_common.sh@653 -- # es=161 00:05:56.333 02:22:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.333 02:22:36 -- common/autotest_common.sh@662 -- # es=33 00:05:56.333 02:22:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:56.333 02:22:36 -- common/autotest_common.sh@670 -- # es=1 00:05:56.333 02:22:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.333 00:05:56.333 real 0m0.532s 00:05:56.333 user 0m0.348s 00:05:56.333 sys 0m0.130s 00:05:56.333 02:22:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.333 02:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.333 ************************************ 00:05:56.333 END TEST accel_compress_verify 00:05:56.333 ************************************ 00:05:56.333 02:22:36 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:56.333 02:22:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:56.333 02:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.333 02:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.334 ************************************ 00:05:56.334 START TEST accel_wrong_workload 00:05:56.334 ************************************ 00:05:56.334 02:22:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:56.334 02:22:36 -- common/autotest_common.sh@650 -- # local es=0 00:05:56.334 02:22:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:56.334 02:22:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:56.334 02:22:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.334 02:22:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:56.334 02:22:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.334 02:22:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:56.334 02:22:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:56.334 02:22:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.334 02:22:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.334 02:22:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.334 02:22:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.334 02:22:36 -- accel/accel.sh@42 -- # jq -r . 
00:05:56.334 Unsupported workload type: foobar 00:05:56.334 [2024-11-21 02:22:36.818668] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:56.334 accel_perf options: 00:05:56.334 [-h help message] 00:05:56.334 [-q queue depth per core] 00:05:56.334 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:56.334 [-T number of threads per core 00:05:56.334 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:56.334 [-t time in seconds] 00:05:56.334 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:56.334 [ dif_verify, , dif_generate, dif_generate_copy 00:05:56.334 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:56.334 [-l for compress/decompress workloads, name of uncompressed input file 00:05:56.334 [-S for crc32c workload, use this seed value (default 0) 00:05:56.334 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:56.334 [-f for fill workload, use this BYTE value (default 255) 00:05:56.334 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:56.334 [-y verify result if this switch is on] 00:05:56.334 [-a tasks to allocate per core (default: same value as -q)] 00:05:56.334 Can be used to spread operations across a wider range of memory. 00:05:56.334 02:22:36 -- common/autotest_common.sh@653 -- # es=1 00:05:56.334 02:22:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.334 02:22:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.334 02:22:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.334 00:05:56.334 real 0m0.040s 00:05:56.334 user 0m0.020s 00:05:56.334 sys 0m0.020s 00:05:56.334 02:22:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.334 02:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.334 ************************************ 00:05:56.334 END TEST accel_wrong_workload 00:05:56.334 ************************************ 00:05:56.334 02:22:36 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:56.334 02:22:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:56.334 02:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.334 02:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.334 ************************************ 00:05:56.334 START TEST accel_negative_buffers 00:05:56.334 ************************************ 00:05:56.334 02:22:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:56.334 02:22:36 -- common/autotest_common.sh@650 -- # local es=0 00:05:56.334 02:22:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:56.334 02:22:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:56.334 02:22:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.334 02:22:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:56.334 02:22:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:56.334 02:22:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:56.334 02:22:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:56.334 02:22:36 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:56.334 02:22:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.334 02:22:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.334 02:22:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.334 02:22:36 -- accel/accel.sh@42 -- # jq -r . 00:05:56.334 -x option must be non-negative. 00:05:56.334 [2024-11-21 02:22:36.909619] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:56.334 accel_perf options: 00:05:56.334 [-h help message] 00:05:56.334 [-q queue depth per core] 00:05:56.334 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:56.334 [-T number of threads per core 00:05:56.334 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:56.334 [-t time in seconds] 00:05:56.334 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:56.334 [ dif_verify, , dif_generate, dif_generate_copy 00:05:56.334 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:56.334 [-l for compress/decompress workloads, name of uncompressed input file 00:05:56.334 [-S for crc32c workload, use this seed value (default 0) 00:05:56.334 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:56.334 [-f for fill workload, use this BYTE value (default 255) 00:05:56.334 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:56.334 [-y verify result if this switch is on] 00:05:56.334 [-a tasks to allocate per core (default: same value as -q)] 00:05:56.334 Can be used to spread operations across a wider range of memory. 
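The usage text above is printed whenever accel_perf rejects an argument, here the negative buffer count "-x -1"; the same option list covers the knobs the remaining tests rely on (-t run time, -w workload, -q queue depth, -S crc32c seed, -y verify). For reference, the crc32c case exercised next in this log boils down to the following invocation (config fd and JSON plumbing omitted):

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # -x only applies to the xor workload and must be non-negative, hence the error above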
00:05:56.334 02:22:36 -- common/autotest_common.sh@653 -- # es=1 00:05:56.334 02:22:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:56.334 02:22:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:56.334 02:22:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:56.334 00:05:56.334 real 0m0.032s 00:05:56.334 user 0m0.019s 00:05:56.334 sys 0m0.013s 00:05:56.334 02:22:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.334 02:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.334 ************************************ 00:05:56.334 END TEST accel_negative_buffers 00:05:56.334 ************************************ 00:05:56.334 02:22:36 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:56.334 02:22:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:56.334 02:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.334 02:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.334 ************************************ 00:05:56.334 START TEST accel_crc32c 00:05:56.334 ************************************ 00:05:56.334 02:22:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:56.334 02:22:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.334 02:22:36 -- accel/accel.sh@17 -- # local accel_module 00:05:56.334 02:22:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:56.334 02:22:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:56.334 02:22:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.334 02:22:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.334 02:22:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.334 02:22:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.334 02:22:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.334 02:22:36 -- accel/accel.sh@42 -- # jq -r . 00:05:56.593 [2024-11-21 02:22:36.986937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.593 [2024-11-21 02:22:36.987020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58463 ] 00:05:56.593 [2024-11-21 02:22:37.120158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.593 [2024-11-21 02:22:37.229944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.965 02:22:38 -- accel/accel.sh@18 -- # out=' 00:05:57.965 SPDK Configuration: 00:05:57.965 Core mask: 0x1 00:05:57.965 00:05:57.965 Accel Perf Configuration: 00:05:57.965 Workload Type: crc32c 00:05:57.965 CRC-32C seed: 32 00:05:57.965 Transfer size: 4096 bytes 00:05:57.965 Vector count 1 00:05:57.965 Module: software 00:05:57.965 Queue depth: 32 00:05:57.965 Allocate depth: 32 00:05:57.965 # threads/core: 1 00:05:57.965 Run time: 1 seconds 00:05:57.965 Verify: Yes 00:05:57.965 00:05:57.966 Running for 1 seconds... 
00:05:57.966 00:05:57.966 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.966 ------------------------------------------------------------------------------------ 00:05:57.966 0,0 499904/s 1952 MiB/s 0 0 00:05:57.966 ==================================================================================== 00:05:57.966 Total 499904/s 1952 MiB/s 0 0' 00:05:57.966 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:57.966 02:22:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:57.966 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:57.966 02:22:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.966 02:22:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:57.966 02:22:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.966 02:22:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.966 02:22:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.966 02:22:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.966 02:22:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.966 02:22:38 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.966 02:22:38 -- accel/accel.sh@42 -- # jq -r . 00:05:57.966 [2024-11-21 02:22:38.519871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.966 [2024-11-21 02:22:38.519976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58477 ] 00:05:58.224 [2024-11-21 02:22:38.656108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.224 [2024-11-21 02:22:38.767247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val= 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val= 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val=0x1 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val= 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val= 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val=crc32c 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val=32 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val= 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val=software 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@23 -- # accel_module=software 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val=32 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val=32 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val=1 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val=Yes 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val= 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:58.224 02:22:38 -- accel/accel.sh@21 -- # val= 00:05:58.224 02:22:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # IFS=: 00:05:58.224 02:22:38 -- accel/accel.sh@20 -- # read -r var val 00:05:59.604 02:22:40 -- accel/accel.sh@21 -- # val= 00:05:59.604 02:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.604 02:22:40 -- accel/accel.sh@21 -- # val= 00:05:59.604 02:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.604 02:22:40 -- accel/accel.sh@21 -- # val= 00:05:59.604 02:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.604 02:22:40 -- accel/accel.sh@21 -- # val= 00:05:59.604 02:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.604 02:22:40 -- accel/accel.sh@21 -- # val= 00:05:59.604 02:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.604 02:22:40 -- 
accel/accel.sh@20 -- # read -r var val 00:05:59.604 02:22:40 -- accel/accel.sh@21 -- # val= 00:05:59.604 02:22:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # IFS=: 00:05:59.604 02:22:40 -- accel/accel.sh@20 -- # read -r var val 00:05:59.604 02:22:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:59.604 02:22:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:59.604 02:22:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.604 00:05:59.604 real 0m3.109s 00:05:59.604 user 0m2.653s 00:05:59.604 sys 0m0.255s 00:05:59.604 02:22:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.604 02:22:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.604 ************************************ 00:05:59.604 END TEST accel_crc32c 00:05:59.604 ************************************ 00:05:59.604 02:22:40 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:59.604 02:22:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:59.604 02:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.604 02:22:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.604 ************************************ 00:05:59.604 START TEST accel_crc32c_C2 00:05:59.604 ************************************ 00:05:59.604 02:22:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:59.604 02:22:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.604 02:22:40 -- accel/accel.sh@17 -- # local accel_module 00:05:59.604 02:22:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:59.604 02:22:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:59.604 02:22:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.604 02:22:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.604 02:22:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.604 02:22:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.604 02:22:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.604 02:22:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.604 02:22:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.604 02:22:40 -- accel/accel.sh@42 -- # jq -r . 00:05:59.604 [2024-11-21 02:22:40.157779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.604 [2024-11-21 02:22:40.158035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58517 ] 00:05:59.862 [2024-11-21 02:22:40.293839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.862 [2024-11-21 02:22:40.384173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.238 02:22:41 -- accel/accel.sh@18 -- # out=' 00:06:01.238 SPDK Configuration: 00:06:01.238 Core mask: 0x1 00:06:01.238 00:06:01.238 Accel Perf Configuration: 00:06:01.238 Workload Type: crc32c 00:06:01.238 CRC-32C seed: 0 00:06:01.238 Transfer size: 4096 bytes 00:06:01.238 Vector count 2 00:06:01.238 Module: software 00:06:01.238 Queue depth: 32 00:06:01.238 Allocate depth: 32 00:06:01.239 # threads/core: 1 00:06:01.239 Run time: 1 seconds 00:06:01.239 Verify: Yes 00:06:01.239 00:06:01.239 Running for 1 seconds... 
00:06:01.239 00:06:01.239 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.239 ------------------------------------------------------------------------------------ 00:06:01.239 0,0 419456/s 3277 MiB/s 0 0 00:06:01.239 ==================================================================================== 00:06:01.239 Total 419456/s 1638 MiB/s 0 0' 00:06:01.239 02:22:41 -- accel/accel.sh@20 -- # IFS=: 00:06:01.239 02:22:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:01.239 02:22:41 -- accel/accel.sh@20 -- # read -r var val 00:06:01.239 02:22:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.239 02:22:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:01.239 02:22:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.239 02:22:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.239 02:22:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.239 02:22:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.239 02:22:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.239 02:22:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.239 02:22:41 -- accel/accel.sh@42 -- # jq -r . 00:06:01.239 [2024-11-21 02:22:41.726047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.239 [2024-11-21 02:22:41.726146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58531 ] 00:06:01.239 [2024-11-21 02:22:41.865569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.497 [2024-11-21 02:22:41.981499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.497 02:22:42 -- accel/accel.sh@21 -- # val= 00:06:01.497 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.497 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.497 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.497 02:22:42 -- accel/accel.sh@21 -- # val= 00:06:01.497 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.497 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.497 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.497 02:22:42 -- accel/accel.sh@21 -- # val=0x1 00:06:01.497 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.497 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val= 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val= 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val=crc32c 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val=0 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val= 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val=software 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val=32 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val=32 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val=1 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val=Yes 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val= 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:01.498 02:22:42 -- accel/accel.sh@21 -- # val= 00:06:01.498 02:22:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # IFS=: 00:06:01.498 02:22:42 -- accel/accel.sh@20 -- # read -r var val 00:06:02.891 02:22:43 -- accel/accel.sh@21 -- # val= 00:06:02.891 02:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.891 02:22:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.892 02:22:43 -- accel/accel.sh@21 -- # val= 00:06:02.892 02:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.892 02:22:43 -- accel/accel.sh@21 -- # val= 00:06:02.892 02:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.892 02:22:43 -- accel/accel.sh@21 -- # val= 00:06:02.892 02:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.892 02:22:43 -- accel/accel.sh@21 -- # val= 00:06:02.892 02:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.892 02:22:43 -- 
accel/accel.sh@20 -- # read -r var val 00:06:02.892 02:22:43 -- accel/accel.sh@21 -- # val= 00:06:02.892 02:22:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # IFS=: 00:06:02.892 02:22:43 -- accel/accel.sh@20 -- # read -r var val 00:06:02.892 ************************************ 00:06:02.892 END TEST accel_crc32c_C2 00:06:02.892 ************************************ 00:06:02.892 02:22:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.892 02:22:43 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:02.892 02:22:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.892 00:06:02.892 real 0m3.171s 00:06:02.892 user 0m2.692s 00:06:02.892 sys 0m0.274s 00:06:02.892 02:22:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.892 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:06:02.892 02:22:43 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:02.892 02:22:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:02.892 02:22:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.892 02:22:43 -- common/autotest_common.sh@10 -- # set +x 00:06:02.892 ************************************ 00:06:02.892 START TEST accel_copy 00:06:02.892 ************************************ 00:06:02.892 02:22:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:02.892 02:22:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.892 02:22:43 -- accel/accel.sh@17 -- # local accel_module 00:06:02.892 02:22:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:02.892 02:22:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:02.892 02:22:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.892 02:22:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.892 02:22:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.892 02:22:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.892 02:22:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.892 02:22:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.892 02:22:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.892 02:22:43 -- accel/accel.sh@42 -- # jq -r . 00:06:02.892 [2024-11-21 02:22:43.384221] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.892 [2024-11-21 02:22:43.384546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58571 ] 00:06:02.892 [2024-11-21 02:22:43.519977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.151 [2024-11-21 02:22:43.638712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.528 02:22:44 -- accel/accel.sh@18 -- # out=' 00:06:04.528 SPDK Configuration: 00:06:04.528 Core mask: 0x1 00:06:04.528 00:06:04.528 Accel Perf Configuration: 00:06:04.528 Workload Type: copy 00:06:04.528 Transfer size: 4096 bytes 00:06:04.528 Vector count 1 00:06:04.528 Module: software 00:06:04.528 Queue depth: 32 00:06:04.528 Allocate depth: 32 00:06:04.528 # threads/core: 1 00:06:04.528 Run time: 1 seconds 00:06:04.528 Verify: Yes 00:06:04.528 00:06:04.528 Running for 1 seconds... 
00:06:04.528 00:06:04.528 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.528 ------------------------------------------------------------------------------------ 00:06:04.528 0,0 375808/s 1468 MiB/s 0 0 00:06:04.529 ==================================================================================== 00:06:04.529 Total 375808/s 1468 MiB/s 0 0' 00:06:04.529 02:22:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:04.529 02:22:44 -- accel/accel.sh@20 -- # IFS=: 00:06:04.529 02:22:44 -- accel/accel.sh@20 -- # read -r var val 00:06:04.529 02:22:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:04.529 02:22:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.529 02:22:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.529 02:22:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.529 02:22:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.529 02:22:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.529 02:22:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.529 02:22:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.529 02:22:44 -- accel/accel.sh@42 -- # jq -r . 00:06:04.529 [2024-11-21 02:22:44.970592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.529 [2024-11-21 02:22:44.970689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58591 ] 00:06:04.529 [2024-11-21 02:22:45.099205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.788 [2024-11-21 02:22:45.199289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val= 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val= 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val=0x1 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val= 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val= 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val=copy 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- 
accel/accel.sh@21 -- # val= 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val=software 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val=32 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val=32 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val=1 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val=Yes 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val= 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:04.788 02:22:45 -- accel/accel.sh@21 -- # val= 00:06:04.788 02:22:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # IFS=: 00:06:04.788 02:22:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.165 02:22:46 -- accel/accel.sh@21 -- # val= 00:06:06.165 02:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.165 02:22:46 -- accel/accel.sh@20 -- # IFS=: 00:06:06.165 02:22:46 -- accel/accel.sh@20 -- # read -r var val 00:06:06.165 02:22:46 -- accel/accel.sh@21 -- # val= 00:06:06.165 02:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.165 02:22:46 -- accel/accel.sh@20 -- # IFS=: 00:06:06.165 02:22:46 -- accel/accel.sh@20 -- # read -r var val 00:06:06.165 02:22:46 -- accel/accel.sh@21 -- # val= 00:06:06.165 02:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.165 02:22:46 -- accel/accel.sh@20 -- # IFS=: 00:06:06.165 02:22:46 -- accel/accel.sh@20 -- # read -r var val 00:06:06.165 02:22:46 -- accel/accel.sh@21 -- # val= 00:06:06.166 02:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.166 02:22:46 -- accel/accel.sh@20 -- # IFS=: 00:06:06.166 02:22:46 -- accel/accel.sh@20 -- # read -r var val 00:06:06.166 02:22:46 -- accel/accel.sh@21 -- # val= 00:06:06.166 02:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.166 02:22:46 -- accel/accel.sh@20 -- # IFS=: 00:06:06.166 02:22:46 -- accel/accel.sh@20 -- # read -r var val 00:06:06.166 02:22:46 -- accel/accel.sh@21 -- # val= 00:06:06.166 02:22:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.166 02:22:46 -- accel/accel.sh@20 -- # IFS=: 00:06:06.166 02:22:46 -- 
accel/accel.sh@20 -- # read -r var val 00:06:06.166 02:22:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.166 02:22:46 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:06.166 02:22:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.166 00:06:06.166 real 0m3.160s 00:06:06.166 user 0m2.671s 00:06:06.166 sys 0m0.280s 00:06:06.166 02:22:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.166 02:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:06.166 ************************************ 00:06:06.166 END TEST accel_copy 00:06:06.166 ************************************ 00:06:06.166 02:22:46 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.166 02:22:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:06.166 02:22:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.166 02:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:06.166 ************************************ 00:06:06.166 START TEST accel_fill 00:06:06.166 ************************************ 00:06:06.166 02:22:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.166 02:22:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.166 02:22:46 -- accel/accel.sh@17 -- # local accel_module 00:06:06.166 02:22:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.166 02:22:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.166 02:22:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.166 02:22:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.166 02:22:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.166 02:22:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.166 02:22:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.166 02:22:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.166 02:22:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.166 02:22:46 -- accel/accel.sh@42 -- # jq -r . 00:06:06.166 [2024-11-21 02:22:46.598605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.166 [2024-11-21 02:22:46.599460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58625 ] 00:06:06.166 [2024-11-21 02:22:46.737638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.424 [2024-11-21 02:22:46.844261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.799 02:22:48 -- accel/accel.sh@18 -- # out=' 00:06:07.799 SPDK Configuration: 00:06:07.799 Core mask: 0x1 00:06:07.799 00:06:07.799 Accel Perf Configuration: 00:06:07.799 Workload Type: fill 00:06:07.799 Fill pattern: 0x80 00:06:07.799 Transfer size: 4096 bytes 00:06:07.799 Vector count 1 00:06:07.799 Module: software 00:06:07.799 Queue depth: 64 00:06:07.799 Allocate depth: 64 00:06:07.799 # threads/core: 1 00:06:07.799 Run time: 1 seconds 00:06:07.799 Verify: Yes 00:06:07.799 00:06:07.799 Running for 1 seconds... 
00:06:07.799 00:06:07.799 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.799 ------------------------------------------------------------------------------------ 00:06:07.799 0,0 538624/s 2104 MiB/s 0 0 00:06:07.799 ==================================================================================== 00:06:07.799 Total 538624/s 2104 MiB/s 0 0' 00:06:07.799 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:07.799 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:07.799 02:22:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.799 02:22:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:07.799 02:22:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.799 02:22:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.800 02:22:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.800 02:22:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.800 02:22:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.800 02:22:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.800 02:22:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.800 02:22:48 -- accel/accel.sh@42 -- # jq -r . 00:06:07.800 [2024-11-21 02:22:48.200944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.800 [2024-11-21 02:22:48.201092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58645 ] 00:06:07.800 [2024-11-21 02:22:48.346303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.800 [2024-11-21 02:22:48.425769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val= 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val= 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val=0x1 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val= 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val= 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val=fill 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val=0x80 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 
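A quick sanity check on the fill numbers in the table above: the bandwidth column is simply transfers per second multiplied by the transfer size, so 538624 fills of 4096 bytes each per second comes out to the reported 2104 MiB/s. The same back-of-the-envelope arithmetic in shell, with the values copied from that run:

    # 538624 fills/s * 4096 bytes each, converted to MiB/s (1 MiB = 1048576 bytes)
    echo $(( 538624 * 4096 / 1048576 ))   # -> 2104, matching the table above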
00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val= 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val=software 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.058 02:22:48 -- accel/accel.sh@21 -- # val=64 00:06:08.058 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.058 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 02:22:48 -- accel/accel.sh@21 -- # val=64 00:06:08.059 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 02:22:48 -- accel/accel.sh@21 -- # val=1 00:06:08.059 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 02:22:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.059 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 02:22:48 -- accel/accel.sh@21 -- # val=Yes 00:06:08.059 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 02:22:48 -- accel/accel.sh@21 -- # val= 00:06:08.059 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:08.059 02:22:48 -- accel/accel.sh@21 -- # val= 00:06:08.059 02:22:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # IFS=: 00:06:08.059 02:22:48 -- accel/accel.sh@20 -- # read -r var val 00:06:09.492 02:22:49 -- accel/accel.sh@21 -- # val= 00:06:09.492 02:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # IFS=: 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # read -r var val 00:06:09.492 02:22:49 -- accel/accel.sh@21 -- # val= 00:06:09.492 02:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # IFS=: 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # read -r var val 00:06:09.492 02:22:49 -- accel/accel.sh@21 -- # val= 00:06:09.492 02:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # IFS=: 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # read -r var val 00:06:09.492 02:22:49 -- accel/accel.sh@21 -- # val= 00:06:09.492 02:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # IFS=: 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # read -r var val 00:06:09.492 02:22:49 -- accel/accel.sh@21 -- # val= 00:06:09.492 02:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # IFS=: 
00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # read -r var val 00:06:09.492 02:22:49 -- accel/accel.sh@21 -- # val= 00:06:09.492 02:22:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # IFS=: 00:06:09.492 02:22:49 -- accel/accel.sh@20 -- # read -r var val 00:06:09.492 02:22:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.492 02:22:49 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:09.492 02:22:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.492 00:06:09.492 real 0m3.197s 00:06:09.492 user 0m2.721s 00:06:09.492 sys 0m0.270s 00:06:09.492 02:22:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.492 ************************************ 00:06:09.492 END TEST accel_fill 00:06:09.492 02:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:09.492 ************************************ 00:06:09.492 02:22:49 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:09.492 02:22:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:09.492 02:22:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.492 02:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:09.492 ************************************ 00:06:09.492 START TEST accel_copy_crc32c 00:06:09.492 ************************************ 00:06:09.492 02:22:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:09.492 02:22:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.492 02:22:49 -- accel/accel.sh@17 -- # local accel_module 00:06:09.492 02:22:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:09.492 02:22:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:09.492 02:22:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.492 02:22:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.492 02:22:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.492 02:22:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.492 02:22:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.492 02:22:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.492 02:22:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.492 02:22:49 -- accel/accel.sh@42 -- # jq -r . 00:06:09.492 [2024-11-21 02:22:49.843013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:09.492 [2024-11-21 02:22:49.843136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58679 ] 00:06:09.492 [2024-11-21 02:22:49.973818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.492 [2024-11-21 02:22:50.102453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.896 02:22:51 -- accel/accel.sh@18 -- # out=' 00:06:10.896 SPDK Configuration: 00:06:10.896 Core mask: 0x1 00:06:10.896 00:06:10.896 Accel Perf Configuration: 00:06:10.896 Workload Type: copy_crc32c 00:06:10.896 CRC-32C seed: 0 00:06:10.896 Vector size: 4096 bytes 00:06:10.896 Transfer size: 4096 bytes 00:06:10.896 Vector count 1 00:06:10.896 Module: software 00:06:10.896 Queue depth: 32 00:06:10.896 Allocate depth: 32 00:06:10.896 # threads/core: 1 00:06:10.896 Run time: 1 seconds 00:06:10.896 Verify: Yes 00:06:10.896 00:06:10.896 Running for 1 seconds... 
00:06:10.896 00:06:10.896 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.896 ------------------------------------------------------------------------------------ 00:06:10.896 0,0 316064/s 1234 MiB/s 0 0 00:06:10.896 ==================================================================================== 00:06:10.896 Total 316064/s 1234 MiB/s 0 0' 00:06:10.896 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:10.896 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:10.896 02:22:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:10.896 02:22:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:10.896 02:22:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.896 02:22:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.896 02:22:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.896 02:22:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.896 02:22:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.896 02:22:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.896 02:22:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.896 02:22:51 -- accel/accel.sh@42 -- # jq -r . 00:06:10.896 [2024-11-21 02:22:51.440172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.896 [2024-11-21 02:22:51.440281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58699 ] 00:06:11.156 [2024-11-21 02:22:51.575613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.156 [2024-11-21 02:22:51.654323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val= 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val= 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val=0x1 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val= 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val= 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val=0 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 
02:22:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val= 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val=software 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val=32 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val=32 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val=1 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val=Yes 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val= 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:11.156 02:22:51 -- accel/accel.sh@21 -- # val= 00:06:11.156 02:22:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # IFS=: 00:06:11.156 02:22:51 -- accel/accel.sh@20 -- # read -r var val 00:06:12.533 02:22:52 -- accel/accel.sh@21 -- # val= 00:06:12.533 02:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # IFS=: 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # read -r var val 00:06:12.533 02:22:52 -- accel/accel.sh@21 -- # val= 00:06:12.533 02:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # IFS=: 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # read -r var val 00:06:12.533 02:22:52 -- accel/accel.sh@21 -- # val= 00:06:12.533 02:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # IFS=: 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # read -r var val 00:06:12.533 02:22:52 -- accel/accel.sh@21 -- # val= 00:06:12.533 02:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # IFS=: 
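Each of these TEST blocks drives accel_perf twice: first with plain command-line flags, then a second pass with -c /dev/fd/62, where descriptor 62 carries the JSON accel configuration that build_accel_config assembles; the long runs of val= / case / IFS lines in the trace are accel.sh stepping through the option list for that second pass. The /dev/fd/62 plumbing is ordinary bash process substitution plus a descriptor redirect, sketched below with a placeholder document rather than SPDK's real config schema, which this excerpt does not show.

    # Illustration of the -c /dev/fd/62 pattern only; the JSON body here is a placeholder.
    run_with_config() {
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 "$@" 62< <(
            printf '%s\n' '{ "note": "accel JSON config from build_accel_config would go here" }'
        )
    }
    run_with_config -t 1 -w copy_crc32c -y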
00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # read -r var val 00:06:12.533 02:22:52 -- accel/accel.sh@21 -- # val= 00:06:12.533 02:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # IFS=: 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # read -r var val 00:06:12.533 02:22:52 -- accel/accel.sh@21 -- # val= 00:06:12.533 02:22:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # IFS=: 00:06:12.533 02:22:52 -- accel/accel.sh@20 -- # read -r var val 00:06:12.533 02:22:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.533 02:22:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:12.533 02:22:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.533 00:06:12.533 real 0m3.152s 00:06:12.533 user 0m2.671s 00:06:12.533 sys 0m0.281s 00:06:12.533 02:22:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.533 02:22:52 -- common/autotest_common.sh@10 -- # set +x 00:06:12.533 ************************************ 00:06:12.533 END TEST accel_copy_crc32c 00:06:12.533 ************************************ 00:06:12.533 02:22:53 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.533 02:22:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:12.533 02:22:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.533 02:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:12.533 ************************************ 00:06:12.533 START TEST accel_copy_crc32c_C2 00:06:12.533 ************************************ 00:06:12.533 02:22:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.533 02:22:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.533 02:22:53 -- accel/accel.sh@17 -- # local accel_module 00:06:12.533 02:22:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:12.533 02:22:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:12.533 02:22:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.533 02:22:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.533 02:22:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.533 02:22:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.533 02:22:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.533 02:22:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.533 02:22:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.533 02:22:53 -- accel/accel.sh@42 -- # jq -r . 00:06:12.533 [2024-11-21 02:22:53.051059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:12.533 [2024-11-21 02:22:53.051141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58733 ] 00:06:12.792 [2024-11-21 02:22:53.178611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.792 [2024-11-21 02:22:53.267098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.170 02:22:54 -- accel/accel.sh@18 -- # out=' 00:06:14.170 SPDK Configuration: 00:06:14.170 Core mask: 0x1 00:06:14.170 00:06:14.170 Accel Perf Configuration: 00:06:14.170 Workload Type: copy_crc32c 00:06:14.170 CRC-32C seed: 0 00:06:14.170 Vector size: 4096 bytes 00:06:14.170 Transfer size: 8192 bytes 00:06:14.170 Vector count 2 00:06:14.170 Module: software 00:06:14.170 Queue depth: 32 00:06:14.170 Allocate depth: 32 00:06:14.170 # threads/core: 1 00:06:14.170 Run time: 1 seconds 00:06:14.170 Verify: Yes 00:06:14.170 00:06:14.170 Running for 1 seconds... 00:06:14.170 00:06:14.170 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.170 ------------------------------------------------------------------------------------ 00:06:14.170 0,0 224928/s 1757 MiB/s 0 0 00:06:14.170 ==================================================================================== 00:06:14.170 Total 224928/s 878 MiB/s 0 0' 00:06:14.170 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.170 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.170 02:22:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:14.170 02:22:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:14.170 02:22:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.170 02:22:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.170 02:22:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.170 02:22:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.170 02:22:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.170 02:22:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.170 02:22:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.170 02:22:54 -- accel/accel.sh@42 -- # jq -r . 00:06:14.170 [2024-11-21 02:22:54.616120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:14.170 [2024-11-21 02:22:54.616235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58753 ] 00:06:14.170 [2024-11-21 02:22:54.754155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.430 [2024-11-21 02:22:54.831322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val= 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val= 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val=0x1 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val= 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val= 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val=0 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val= 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val=software 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val=32 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val=32 
00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val=1 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val=Yes 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val= 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:14.430 02:22:54 -- accel/accel.sh@21 -- # val= 00:06:14.430 02:22:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # IFS=: 00:06:14.430 02:22:54 -- accel/accel.sh@20 -- # read -r var val 00:06:15.807 02:22:56 -- accel/accel.sh@21 -- # val= 00:06:15.807 02:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.807 02:22:56 -- accel/accel.sh@20 -- # IFS=: 00:06:15.807 02:22:56 -- accel/accel.sh@20 -- # read -r var val 00:06:15.807 02:22:56 -- accel/accel.sh@21 -- # val= 00:06:15.807 02:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.807 02:22:56 -- accel/accel.sh@20 -- # IFS=: 00:06:15.807 02:22:56 -- accel/accel.sh@20 -- # read -r var val 00:06:15.807 02:22:56 -- accel/accel.sh@21 -- # val= 00:06:15.807 02:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.807 02:22:56 -- accel/accel.sh@20 -- # IFS=: 00:06:15.807 02:22:56 -- accel/accel.sh@20 -- # read -r var val 00:06:15.807 02:22:56 -- accel/accel.sh@21 -- # val= 00:06:15.807 02:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.807 02:22:56 -- accel/accel.sh@20 -- # IFS=: 00:06:15.807 02:22:56 -- accel/accel.sh@20 -- # read -r var val 00:06:15.807 02:22:56 -- accel/accel.sh@21 -- # val= 00:06:15.808 02:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.808 02:22:56 -- accel/accel.sh@20 -- # IFS=: 00:06:15.808 02:22:56 -- accel/accel.sh@20 -- # read -r var val 00:06:15.808 02:22:56 -- accel/accel.sh@21 -- # val= 00:06:15.808 02:22:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.808 02:22:56 -- accel/accel.sh@20 -- # IFS=: 00:06:15.808 02:22:56 -- accel/accel.sh@20 -- # read -r var val 00:06:15.808 02:22:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.808 02:22:56 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:15.808 02:22:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.808 00:06:15.808 real 0m3.114s 00:06:15.808 user 0m2.639s 00:06:15.808 sys 0m0.273s 00:06:15.808 02:22:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.808 02:22:56 -- common/autotest_common.sh@10 -- # set +x 00:06:15.808 ************************************ 00:06:15.808 END TEST accel_copy_crc32c_C2 00:06:15.808 ************************************ 00:06:15.808 02:22:56 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:15.808 02:22:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:06:15.808 02:22:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.808 02:22:56 -- common/autotest_common.sh@10 -- # set +x 00:06:15.808 ************************************ 00:06:15.808 START TEST accel_dualcast 00:06:15.808 ************************************ 00:06:15.808 02:22:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:15.808 02:22:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.808 02:22:56 -- accel/accel.sh@17 -- # local accel_module 00:06:15.808 02:22:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:15.808 02:22:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:15.808 02:22:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.808 02:22:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.808 02:22:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.808 02:22:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.808 02:22:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.808 02:22:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.808 02:22:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.808 02:22:56 -- accel/accel.sh@42 -- # jq -r . 00:06:15.808 [2024-11-21 02:22:56.212243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.808 [2024-11-21 02:22:56.212348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58793 ] 00:06:15.808 [2024-11-21 02:22:56.344499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.808 [2024-11-21 02:22:56.423402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.183 02:22:57 -- accel/accel.sh@18 -- # out=' 00:06:17.183 SPDK Configuration: 00:06:17.183 Core mask: 0x1 00:06:17.183 00:06:17.183 Accel Perf Configuration: 00:06:17.183 Workload Type: dualcast 00:06:17.183 Transfer size: 4096 bytes 00:06:17.183 Vector count 1 00:06:17.183 Module: software 00:06:17.183 Queue depth: 32 00:06:17.183 Allocate depth: 32 00:06:17.183 # threads/core: 1 00:06:17.183 Run time: 1 seconds 00:06:17.183 Verify: Yes 00:06:17.183 00:06:17.183 Running for 1 seconds... 00:06:17.183 00:06:17.183 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.183 ------------------------------------------------------------------------------------ 00:06:17.183 0,0 437184/s 1707 MiB/s 0 0 00:06:17.183 ==================================================================================== 00:06:17.183 Total 437184/s 1707 MiB/s 0 0' 00:06:17.183 02:22:57 -- accel/accel.sh@20 -- # IFS=: 00:06:17.183 02:22:57 -- accel/accel.sh@20 -- # read -r var val 00:06:17.183 02:22:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:17.183 02:22:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:17.183 02:22:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.183 02:22:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.183 02:22:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.183 02:22:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.183 02:22:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.183 02:22:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.183 02:22:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.183 02:22:57 -- accel/accel.sh@42 -- # jq -r . 
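To compare the software-path numbers across the workloads in this log (dualcast above: 437184 transfers/s of 4096 bytes, about 1707 MiB/s), one option is to pull out just the "Total" rows. A small sketch, assuming this console output has been saved to a file; build.log is a placeholder name, not a file produced by the job:
awk '$2 == "Total" { print $3, $4, $5 }' build.log
# prints one line per workload, e.g.: 437184/s 1707 MiB/s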
00:06:17.183 [2024-11-21 02:22:57.764616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.183 [2024-11-21 02:22:57.764773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58807 ] 00:06:17.442 [2024-11-21 02:22:57.901975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.442 [2024-11-21 02:22:57.976589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val= 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val= 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val=0x1 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val= 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val= 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val=dualcast 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val= 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val=software 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val=32 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val=32 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val=1 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 
02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val=Yes 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val= 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:17.442 02:22:58 -- accel/accel.sh@21 -- # val= 00:06:17.442 02:22:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # IFS=: 00:06:17.442 02:22:58 -- accel/accel.sh@20 -- # read -r var val 00:06:18.819 02:22:59 -- accel/accel.sh@21 -- # val= 00:06:18.819 02:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.819 02:22:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.819 02:22:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.819 02:22:59 -- accel/accel.sh@21 -- # val= 00:06:18.819 02:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.819 02:22:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.819 02:22:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.819 02:22:59 -- accel/accel.sh@21 -- # val= 00:06:18.819 02:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.819 02:22:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.819 02:22:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.820 02:22:59 -- accel/accel.sh@21 -- # val= 00:06:18.820 02:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.820 02:22:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.820 02:22:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.820 02:22:59 -- accel/accel.sh@21 -- # val= 00:06:18.820 02:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.820 02:22:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.820 02:22:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.820 02:22:59 -- accel/accel.sh@21 -- # val= 00:06:18.820 02:22:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.820 02:22:59 -- accel/accel.sh@20 -- # IFS=: 00:06:18.820 02:22:59 -- accel/accel.sh@20 -- # read -r var val 00:06:18.820 02:22:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.820 02:22:59 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:18.820 02:22:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.820 00:06:18.820 real 0m3.118s 00:06:18.820 user 0m2.643s 00:06:18.820 sys 0m0.272s 00:06:18.820 02:22:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.820 02:22:59 -- common/autotest_common.sh@10 -- # set +x 00:06:18.820 ************************************ 00:06:18.820 END TEST accel_dualcast 00:06:18.820 ************************************ 00:06:18.820 02:22:59 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:18.820 02:22:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:18.820 02:22:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.820 02:22:59 -- common/autotest_common.sh@10 -- # set +x 00:06:18.820 ************************************ 00:06:18.820 START TEST accel_compare 00:06:18.820 ************************************ 00:06:18.820 02:22:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:18.820 
02:22:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.820 02:22:59 -- accel/accel.sh@17 -- # local accel_module 00:06:18.820 02:22:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:18.820 02:22:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:18.820 02:22:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.820 02:22:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.820 02:22:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.820 02:22:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.820 02:22:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.820 02:22:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.820 02:22:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.820 02:22:59 -- accel/accel.sh@42 -- # jq -r . 00:06:18.820 [2024-11-21 02:22:59.380479] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.820 [2024-11-21 02:22:59.380563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58847 ] 00:06:19.079 [2024-11-21 02:22:59.509384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.079 [2024-11-21 02:22:59.598231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.457 02:23:00 -- accel/accel.sh@18 -- # out=' 00:06:20.457 SPDK Configuration: 00:06:20.457 Core mask: 0x1 00:06:20.457 00:06:20.457 Accel Perf Configuration: 00:06:20.457 Workload Type: compare 00:06:20.457 Transfer size: 4096 bytes 00:06:20.457 Vector count 1 00:06:20.457 Module: software 00:06:20.457 Queue depth: 32 00:06:20.457 Allocate depth: 32 00:06:20.457 # threads/core: 1 00:06:20.457 Run time: 1 seconds 00:06:20.457 Verify: Yes 00:06:20.457 00:06:20.457 Running for 1 seconds... 00:06:20.457 00:06:20.457 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.457 ------------------------------------------------------------------------------------ 00:06:20.457 0,0 570944/s 2230 MiB/s 0 0 00:06:20.457 ==================================================================================== 00:06:20.457 Total 570944/s 2230 MiB/s 0 0' 00:06:20.457 02:23:00 -- accel/accel.sh@20 -- # IFS=: 00:06:20.457 02:23:00 -- accel/accel.sh@20 -- # read -r var val 00:06:20.457 02:23:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:20.457 02:23:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:20.457 02:23:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.457 02:23:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.457 02:23:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.457 02:23:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.457 02:23:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.457 02:23:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.457 02:23:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.457 02:23:00 -- accel/accel.sh@42 -- # jq -r . 00:06:20.457 [2024-11-21 02:23:00.953385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:20.457 [2024-11-21 02:23:00.953573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58861 ] 00:06:20.457 [2024-11-21 02:23:01.089842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.716 [2024-11-21 02:23:01.164987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val= 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val= 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val=0x1 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val= 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val= 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val=compare 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val= 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val=software 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val=32 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val=32 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val=1 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val=Yes 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val= 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:20.716 02:23:01 -- accel/accel.sh@21 -- # val= 00:06:20.716 02:23:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.716 02:23:01 -- accel/accel.sh@20 -- # IFS=: 00:06:20.717 02:23:01 -- accel/accel.sh@20 -- # read -r var val 00:06:22.094 02:23:02 -- accel/accel.sh@21 -- # val= 00:06:22.094 02:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # IFS=: 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # read -r var val 00:06:22.094 02:23:02 -- accel/accel.sh@21 -- # val= 00:06:22.094 02:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # IFS=: 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # read -r var val 00:06:22.094 02:23:02 -- accel/accel.sh@21 -- # val= 00:06:22.094 02:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # IFS=: 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # read -r var val 00:06:22.094 ************************************ 00:06:22.094 END TEST accel_compare 00:06:22.094 ************************************ 00:06:22.094 02:23:02 -- accel/accel.sh@21 -- # val= 00:06:22.094 02:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # IFS=: 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # read -r var val 00:06:22.094 02:23:02 -- accel/accel.sh@21 -- # val= 00:06:22.094 02:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # IFS=: 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # read -r var val 00:06:22.094 02:23:02 -- accel/accel.sh@21 -- # val= 00:06:22.094 02:23:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # IFS=: 00:06:22.094 02:23:02 -- accel/accel.sh@20 -- # read -r var val 00:06:22.094 02:23:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.094 02:23:02 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:22.094 02:23:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.094 00:06:22.094 real 0m3.118s 00:06:22.094 user 0m2.639s 00:06:22.094 sys 0m0.279s 00:06:22.094 02:23:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.094 02:23:02 -- common/autotest_common.sh@10 -- # set +x 00:06:22.094 02:23:02 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:22.094 02:23:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:22.094 02:23:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.094 02:23:02 -- common/autotest_common.sh@10 -- # set +x 00:06:22.094 ************************************ 00:06:22.094 START TEST accel_xor 00:06:22.094 ************************************ 00:06:22.094 02:23:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:22.094 02:23:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.094 02:23:02 -- accel/accel.sh@17 -- # local accel_module 00:06:22.094 
02:23:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:22.094 02:23:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:22.094 02:23:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.094 02:23:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.094 02:23:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.094 02:23:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.094 02:23:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.094 02:23:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.094 02:23:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.094 02:23:02 -- accel/accel.sh@42 -- # jq -r . 00:06:22.094 [2024-11-21 02:23:02.557767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.094 [2024-11-21 02:23:02.557900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58901 ] 00:06:22.094 [2024-11-21 02:23:02.693817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.353 [2024-11-21 02:23:02.771443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.730 02:23:04 -- accel/accel.sh@18 -- # out=' 00:06:23.730 SPDK Configuration: 00:06:23.730 Core mask: 0x1 00:06:23.730 00:06:23.730 Accel Perf Configuration: 00:06:23.730 Workload Type: xor 00:06:23.730 Source buffers: 2 00:06:23.730 Transfer size: 4096 bytes 00:06:23.730 Vector count 1 00:06:23.730 Module: software 00:06:23.730 Queue depth: 32 00:06:23.730 Allocate depth: 32 00:06:23.730 # threads/core: 1 00:06:23.730 Run time: 1 seconds 00:06:23.730 Verify: Yes 00:06:23.730 00:06:23.730 Running for 1 seconds... 00:06:23.730 00:06:23.730 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.730 ------------------------------------------------------------------------------------ 00:06:23.730 0,0 262656/s 1026 MiB/s 0 0 00:06:23.730 ==================================================================================== 00:06:23.730 Total 262656/s 1026 MiB/s 0 0' 00:06:23.730 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.730 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.730 02:23:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:23.730 02:23:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:23.730 02:23:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.730 02:23:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.730 02:23:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.730 02:23:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.730 02:23:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.730 02:23:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.730 02:23:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.730 02:23:04 -- accel/accel.sh@42 -- # jq -r . 00:06:23.730 [2024-11-21 02:23:04.105832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:23.730 [2024-11-21 02:23:04.106166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58915 ] 00:06:23.730 [2024-11-21 02:23:04.242495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.730 [2024-11-21 02:23:04.320309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val= 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val= 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val=0x1 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val= 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val= 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val=xor 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val=2 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val= 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val=software 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val=32 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val=32 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val=1 00:06:23.990 02:23:04 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val=Yes 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val= 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:23.990 02:23:04 -- accel/accel.sh@21 -- # val= 00:06:23.990 02:23:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # IFS=: 00:06:23.990 02:23:04 -- accel/accel.sh@20 -- # read -r var val 00:06:25.368 02:23:05 -- accel/accel.sh@21 -- # val= 00:06:25.368 02:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:25.368 02:23:05 -- accel/accel.sh@21 -- # val= 00:06:25.368 02:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:25.368 02:23:05 -- accel/accel.sh@21 -- # val= 00:06:25.368 02:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:25.368 02:23:05 -- accel/accel.sh@21 -- # val= 00:06:25.368 ************************************ 00:06:25.368 END TEST accel_xor 00:06:25.368 ************************************ 00:06:25.368 02:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:25.368 02:23:05 -- accel/accel.sh@21 -- # val= 00:06:25.368 02:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:25.368 02:23:05 -- accel/accel.sh@21 -- # val= 00:06:25.368 02:23:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # IFS=: 00:06:25.368 02:23:05 -- accel/accel.sh@20 -- # read -r var val 00:06:25.368 02:23:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.368 02:23:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:25.368 02:23:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.368 00:06:25.368 real 0m3.093s 00:06:25.368 user 0m2.622s 00:06:25.368 sys 0m0.264s 00:06:25.368 02:23:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.368 02:23:05 -- common/autotest_common.sh@10 -- # set +x 00:06:25.368 02:23:05 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:25.368 02:23:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:25.368 02:23:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.368 02:23:05 -- common/autotest_common.sh@10 -- # set +x 00:06:25.368 ************************************ 00:06:25.368 START TEST accel_xor 00:06:25.368 ************************************ 00:06:25.368 
02:23:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:25.368 02:23:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.368 02:23:05 -- accel/accel.sh@17 -- # local accel_module 00:06:25.368 02:23:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:25.368 02:23:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:25.368 02:23:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.368 02:23:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.368 02:23:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.368 02:23:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.368 02:23:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.368 02:23:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.368 02:23:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.368 02:23:05 -- accel/accel.sh@42 -- # jq -r . 00:06:25.368 [2024-11-21 02:23:05.704908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.368 [2024-11-21 02:23:05.705343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:06:25.368 [2024-11-21 02:23:05.842436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.368 [2024-11-21 02:23:05.918363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.745 02:23:07 -- accel/accel.sh@18 -- # out=' 00:06:26.745 SPDK Configuration: 00:06:26.745 Core mask: 0x1 00:06:26.745 00:06:26.745 Accel Perf Configuration: 00:06:26.745 Workload Type: xor 00:06:26.745 Source buffers: 3 00:06:26.745 Transfer size: 4096 bytes 00:06:26.745 Vector count 1 00:06:26.745 Module: software 00:06:26.745 Queue depth: 32 00:06:26.745 Allocate depth: 32 00:06:26.745 # threads/core: 1 00:06:26.745 Run time: 1 seconds 00:06:26.745 Verify: Yes 00:06:26.745 00:06:26.745 Running for 1 seconds... 00:06:26.745 00:06:26.745 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.745 ------------------------------------------------------------------------------------ 00:06:26.745 0,0 259296/s 1012 MiB/s 0 0 00:06:26.745 ==================================================================================== 00:06:26.745 Total 259296/s 1012 MiB/s 0 0' 00:06:26.745 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:26.745 02:23:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:26.745 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:26.745 02:23:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:26.745 02:23:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.745 02:23:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.745 02:23:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.745 02:23:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.745 02:23:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.745 02:23:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.745 02:23:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.745 02:23:07 -- accel/accel.sh@42 -- # jq -r . 00:06:26.745 [2024-11-21 02:23:07.238226] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
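One detail worth noting about the two xor runs (2 source buffers in the earlier run, 3 in the one above): the reported bandwidth appears to count a single 4096-byte buffer per transfer rather than all source buffers, since both runs line up with transfers/s times 4096 bytes. A rough check, not output from the run itself:
awk 'BEGIN {
  printf "%.1f MiB/s\n", 262656 * 4096 / (1024 * 1024)   # 2-source run, reported 1026 MiB/s
  printf "%.1f MiB/s\n", 259296 * 4096 / (1024 * 1024)   # 3-source run, reported 1012 MiB/s
}'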
00:06:26.745 [2024-11-21 02:23:07.238567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58969 ] 00:06:26.745 [2024-11-21 02:23:07.371681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.003 [2024-11-21 02:23:07.453078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.003 02:23:07 -- accel/accel.sh@21 -- # val= 00:06:27.003 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.003 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.003 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.003 02:23:07 -- accel/accel.sh@21 -- # val= 00:06:27.003 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.003 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.003 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.003 02:23:07 -- accel/accel.sh@21 -- # val=0x1 00:06:27.003 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val= 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val= 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val=xor 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val=3 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val= 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val=software 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val=32 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val=32 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val=1 00:06:27.004 02:23:07 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val=Yes 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val= 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:27.004 02:23:07 -- accel/accel.sh@21 -- # val= 00:06:27.004 02:23:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # IFS=: 00:06:27.004 02:23:07 -- accel/accel.sh@20 -- # read -r var val 00:06:28.409 02:23:08 -- accel/accel.sh@21 -- # val= 00:06:28.409 02:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # IFS=: 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # read -r var val 00:06:28.409 02:23:08 -- accel/accel.sh@21 -- # val= 00:06:28.409 02:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # IFS=: 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # read -r var val 00:06:28.409 02:23:08 -- accel/accel.sh@21 -- # val= 00:06:28.409 02:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # IFS=: 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # read -r var val 00:06:28.409 02:23:08 -- accel/accel.sh@21 -- # val= 00:06:28.409 02:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # IFS=: 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # read -r var val 00:06:28.409 02:23:08 -- accel/accel.sh@21 -- # val= 00:06:28.409 02:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # IFS=: 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # read -r var val 00:06:28.409 02:23:08 -- accel/accel.sh@21 -- # val= 00:06:28.409 02:23:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # IFS=: 00:06:28.409 02:23:08 -- accel/accel.sh@20 -- # read -r var val 00:06:28.409 02:23:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.409 02:23:08 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:28.409 02:23:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.409 00:06:28.409 real 0m3.073s 00:06:28.409 user 0m2.607s 00:06:28.409 sys 0m0.261s 00:06:28.409 02:23:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.409 02:23:08 -- common/autotest_common.sh@10 -- # set +x 00:06:28.409 ************************************ 00:06:28.409 END TEST accel_xor 00:06:28.409 ************************************ 00:06:28.409 02:23:08 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:28.409 02:23:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:28.409 02:23:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.409 02:23:08 -- common/autotest_common.sh@10 -- # set +x 00:06:28.409 ************************************ 00:06:28.409 START TEST accel_dif_verify 00:06:28.409 ************************************ 
00:06:28.409 02:23:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:28.409 02:23:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.409 02:23:08 -- accel/accel.sh@17 -- # local accel_module 00:06:28.409 02:23:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:28.409 02:23:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:28.409 02:23:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.409 02:23:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.409 02:23:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.409 02:23:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.409 02:23:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.409 02:23:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.409 02:23:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.409 02:23:08 -- accel/accel.sh@42 -- # jq -r . 00:06:28.409 [2024-11-21 02:23:08.834983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:28.409 [2024-11-21 02:23:08.835728] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59009 ] 00:06:28.409 [2024-11-21 02:23:08.971139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.409 [2024-11-21 02:23:09.046966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.802 02:23:10 -- accel/accel.sh@18 -- # out=' 00:06:29.802 SPDK Configuration: 00:06:29.802 Core mask: 0x1 00:06:29.802 00:06:29.803 Accel Perf Configuration: 00:06:29.803 Workload Type: dif_verify 00:06:29.803 Vector size: 4096 bytes 00:06:29.803 Transfer size: 4096 bytes 00:06:29.803 Block size: 512 bytes 00:06:29.803 Metadata size: 8 bytes 00:06:29.803 Vector count 1 00:06:29.803 Module: software 00:06:29.803 Queue depth: 32 00:06:29.803 Allocate depth: 32 00:06:29.803 # threads/core: 1 00:06:29.803 Run time: 1 seconds 00:06:29.803 Verify: No 00:06:29.803 00:06:29.803 Running for 1 seconds... 00:06:29.803 00:06:29.803 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.803 ------------------------------------------------------------------------------------ 00:06:29.803 0,0 127232/s 504 MiB/s 0 0 00:06:29.803 ==================================================================================== 00:06:29.803 Total 127232/s 497 MiB/s 0 0' 00:06:29.803 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:29.803 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:29.803 02:23:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:29.803 02:23:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:29.803 02:23:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.803 02:23:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.803 02:23:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.803 02:23:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.803 02:23:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.803 02:23:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.803 02:23:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.803 02:23:10 -- accel/accel.sh@42 -- # jq -r . 00:06:29.803 [2024-11-21 02:23:10.365734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
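The dif_verify configuration above (4096-byte vectors, 512-byte blocks, 8 bytes of metadata) implies, under the usual reading of those parameters, 8 protection-information tuples checked per buffer, i.e. 64 bytes of DIF per 4096-byte transfer; the Total row again lines up with transfers/s times 4096 bytes. A rough check, not produced by the run itself:
awk 'BEGIN {
  printf "%d bytes of DIF per 4096-byte buffer\n", (4096 / 512) * 8
  printf "%.1f MiB/s\n", 127232 * 4096 / (1024 * 1024)   # Total row reports 497 MiB/s
}'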
00:06:29.803 [2024-11-21 02:23:10.365841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59023 ] 00:06:30.062 [2024-11-21 02:23:10.502080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.062 [2024-11-21 02:23:10.578190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val= 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val= 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val=0x1 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val= 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val= 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val=dif_verify 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val= 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val=software 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 
-- # val=32 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val=32 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val=1 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val=No 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val= 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:30.062 02:23:10 -- accel/accel.sh@21 -- # val= 00:06:30.062 02:23:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # IFS=: 00:06:30.062 02:23:10 -- accel/accel.sh@20 -- # read -r var val 00:06:31.439 02:23:11 -- accel/accel.sh@21 -- # val= 00:06:31.439 02:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # IFS=: 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # read -r var val 00:06:31.439 02:23:11 -- accel/accel.sh@21 -- # val= 00:06:31.439 02:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # IFS=: 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # read -r var val 00:06:31.439 02:23:11 -- accel/accel.sh@21 -- # val= 00:06:31.439 02:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # IFS=: 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # read -r var val 00:06:31.439 02:23:11 -- accel/accel.sh@21 -- # val= 00:06:31.439 02:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # IFS=: 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # read -r var val 00:06:31.439 02:23:11 -- accel/accel.sh@21 -- # val= 00:06:31.439 02:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # IFS=: 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # read -r var val 00:06:31.439 02:23:11 -- accel/accel.sh@21 -- # val= 00:06:31.439 02:23:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # IFS=: 00:06:31.439 02:23:11 -- accel/accel.sh@20 -- # read -r var val 00:06:31.439 02:23:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.439 02:23:11 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:31.439 02:23:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.439 00:06:31.439 real 0m3.063s 00:06:31.439 user 0m2.598s 00:06:31.439 sys 0m0.264s 00:06:31.439 02:23:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.439 02:23:11 -- common/autotest_common.sh@10 -- # set +x 00:06:31.439 ************************************ 00:06:31.439 END TEST 
accel_dif_verify 00:06:31.439 ************************************ 00:06:31.439 02:23:11 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:31.439 02:23:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:31.439 02:23:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.439 02:23:11 -- common/autotest_common.sh@10 -- # set +x 00:06:31.439 ************************************ 00:06:31.439 START TEST accel_dif_generate 00:06:31.439 ************************************ 00:06:31.439 02:23:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:31.439 02:23:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.439 02:23:11 -- accel/accel.sh@17 -- # local accel_module 00:06:31.439 02:23:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:31.439 02:23:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:31.439 02:23:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.439 02:23:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.439 02:23:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.439 02:23:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.439 02:23:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.439 02:23:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.439 02:23:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.440 02:23:11 -- accel/accel.sh@42 -- # jq -r . 00:06:31.440 [2024-11-21 02:23:11.956875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.440 [2024-11-21 02:23:11.957887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59063 ] 00:06:31.698 [2024-11-21 02:23:12.094064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.698 [2024-11-21 02:23:12.169834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.077 02:23:13 -- accel/accel.sh@18 -- # out=' 00:06:33.077 SPDK Configuration: 00:06:33.077 Core mask: 0x1 00:06:33.077 00:06:33.077 Accel Perf Configuration: 00:06:33.077 Workload Type: dif_generate 00:06:33.077 Vector size: 4096 bytes 00:06:33.077 Transfer size: 4096 bytes 00:06:33.077 Block size: 512 bytes 00:06:33.077 Metadata size: 8 bytes 00:06:33.077 Vector count 1 00:06:33.077 Module: software 00:06:33.077 Queue depth: 32 00:06:33.077 Allocate depth: 32 00:06:33.077 # threads/core: 1 00:06:33.077 Run time: 1 seconds 00:06:33.077 Verify: No 00:06:33.077 00:06:33.077 Running for 1 seconds... 
00:06:33.077 00:06:33.077 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.077 ------------------------------------------------------------------------------------ 00:06:33.077 0,0 154048/s 611 MiB/s 0 0 00:06:33.077 ==================================================================================== 00:06:33.077 Total 154048/s 601 MiB/s 0 0' 00:06:33.077 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.077 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.077 02:23:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:33.077 02:23:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:33.077 02:23:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.077 02:23:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.077 02:23:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.077 02:23:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.077 02:23:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.077 02:23:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.077 02:23:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.077 02:23:13 -- accel/accel.sh@42 -- # jq -r . 00:06:33.077 [2024-11-21 02:23:13.491029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.077 [2024-11-21 02:23:13.491149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:06:33.077 [2024-11-21 02:23:13.626396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.077 [2024-11-21 02:23:13.706141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val= 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val= 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val=0x1 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val= 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val= 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val=dif_generate 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 
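# --- aside: a minimal sketch of reproducing the dif_generate run above by hand,
# outside of accel.sh. The binary path and the -t/-w flags are copied from the
# trace; dropping the "-c /dev/fd/62" JSON config is an assumption that holds here
# only because accel_json_cfg=() is empty, so no accel module config is being passed.
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w dif_generate   # 1-second run on the software module
# ---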
00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val= 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val=software 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val=32 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val=32 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val=1 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val=No 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val= 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:33.336 02:23:13 -- accel/accel.sh@21 -- # val= 00:06:33.336 02:23:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # IFS=: 00:06:33.336 02:23:13 -- accel/accel.sh@20 -- # read -r var val 00:06:34.715 02:23:14 -- accel/accel.sh@21 -- # val= 00:06:34.715 02:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # IFS=: 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # read -r var val 00:06:34.715 02:23:14 -- accel/accel.sh@21 -- # val= 00:06:34.715 02:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # IFS=: 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # read -r var val 00:06:34.715 02:23:14 -- accel/accel.sh@21 -- # val= 00:06:34.715 02:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.715 02:23:14 -- 
accel/accel.sh@20 -- # IFS=: 00:06:34.715 ************************************ 00:06:34.715 END TEST accel_dif_generate 00:06:34.715 ************************************ 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # read -r var val 00:06:34.715 02:23:14 -- accel/accel.sh@21 -- # val= 00:06:34.715 02:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # IFS=: 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # read -r var val 00:06:34.715 02:23:14 -- accel/accel.sh@21 -- # val= 00:06:34.715 02:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # IFS=: 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # read -r var val 00:06:34.715 02:23:14 -- accel/accel.sh@21 -- # val= 00:06:34.715 02:23:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # IFS=: 00:06:34.715 02:23:14 -- accel/accel.sh@20 -- # read -r var val 00:06:34.715 02:23:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.715 02:23:14 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:34.715 02:23:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.715 00:06:34.715 real 0m3.070s 00:06:34.715 user 0m2.609s 00:06:34.715 sys 0m0.258s 00:06:34.715 02:23:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:34.715 02:23:14 -- common/autotest_common.sh@10 -- # set +x 00:06:34.715 02:23:15 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:34.715 02:23:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:34.715 02:23:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.715 02:23:15 -- common/autotest_common.sh@10 -- # set +x 00:06:34.715 ************************************ 00:06:34.715 START TEST accel_dif_generate_copy 00:06:34.715 ************************************ 00:06:34.715 02:23:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:34.715 02:23:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.715 02:23:15 -- accel/accel.sh@17 -- # local accel_module 00:06:34.715 02:23:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:34.715 02:23:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:34.715 02:23:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.715 02:23:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.715 02:23:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.715 02:23:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.715 02:23:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.715 02:23:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.715 02:23:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.715 02:23:15 -- accel/accel.sh@42 -- # jq -r . 00:06:34.715 [2024-11-21 02:23:15.085219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:34.715 [2024-11-21 02:23:15.085317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59119 ] 00:06:34.715 [2024-11-21 02:23:15.222080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.715 [2024-11-21 02:23:15.299284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.094 02:23:16 -- accel/accel.sh@18 -- # out=' 00:06:36.094 SPDK Configuration: 00:06:36.094 Core mask: 0x1 00:06:36.094 00:06:36.094 Accel Perf Configuration: 00:06:36.094 Workload Type: dif_generate_copy 00:06:36.094 Vector size: 4096 bytes 00:06:36.094 Transfer size: 4096 bytes 00:06:36.094 Vector count 1 00:06:36.094 Module: software 00:06:36.094 Queue depth: 32 00:06:36.094 Allocate depth: 32 00:06:36.094 # threads/core: 1 00:06:36.094 Run time: 1 seconds 00:06:36.094 Verify: No 00:06:36.094 00:06:36.094 Running for 1 seconds... 00:06:36.094 00:06:36.094 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.094 ------------------------------------------------------------------------------------ 00:06:36.094 0,0 117600/s 466 MiB/s 0 0 00:06:36.094 ==================================================================================== 00:06:36.094 Total 117600/s 459 MiB/s 0 0' 00:06:36.094 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.094 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.094 02:23:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:36.094 02:23:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.094 02:23:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:36.094 02:23:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.094 02:23:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.094 02:23:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.094 02:23:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.094 02:23:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.094 02:23:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.094 02:23:16 -- accel/accel.sh@42 -- # jq -r . 00:06:36.094 [2024-11-21 02:23:16.620678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:36.094 [2024-11-21 02:23:16.620797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:06:36.353 [2024-11-21 02:23:16.749040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.353 [2024-11-21 02:23:16.828573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val= 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val= 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val=0x1 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val= 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val= 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val= 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val=software 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val=32 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val=32 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 
-- # val=1 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val=No 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val= 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:36.353 02:23:16 -- accel/accel.sh@21 -- # val= 00:06:36.353 02:23:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # IFS=: 00:06:36.353 02:23:16 -- accel/accel.sh@20 -- # read -r var val 00:06:37.730 02:23:18 -- accel/accel.sh@21 -- # val= 00:06:37.730 02:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.730 02:23:18 -- accel/accel.sh@21 -- # val= 00:06:37.730 02:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.730 02:23:18 -- accel/accel.sh@21 -- # val= 00:06:37.730 02:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.730 02:23:18 -- accel/accel.sh@21 -- # val= 00:06:37.730 02:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.730 02:23:18 -- accel/accel.sh@21 -- # val= 00:06:37.730 02:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.730 02:23:18 -- accel/accel.sh@21 -- # val= 00:06:37.730 ************************************ 00:06:37.730 END TEST accel_dif_generate_copy 00:06:37.730 ************************************ 00:06:37.730 02:23:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # IFS=: 00:06:37.730 02:23:18 -- accel/accel.sh@20 -- # read -r var val 00:06:37.730 02:23:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.730 02:23:18 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:37.730 02:23:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.730 00:06:37.730 real 0m3.067s 00:06:37.730 user 0m2.611s 00:06:37.730 sys 0m0.252s 00:06:37.730 02:23:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.730 02:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:37.730 02:23:18 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:37.730 02:23:18 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.730 02:23:18 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:37.730 02:23:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.730 02:23:18 -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.730 ************************************ 00:06:37.730 START TEST accel_comp 00:06:37.730 ************************************ 00:06:37.730 02:23:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.730 02:23:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.730 02:23:18 -- accel/accel.sh@17 -- # local accel_module 00:06:37.730 02:23:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.730 02:23:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:37.730 02:23:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.730 02:23:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.730 02:23:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.730 02:23:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.730 02:23:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.730 02:23:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.730 02:23:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.730 02:23:18 -- accel/accel.sh@42 -- # jq -r . 00:06:37.730 [2024-11-21 02:23:18.208292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.730 [2024-11-21 02:23:18.208372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59174 ] 00:06:37.730 [2024-11-21 02:23:18.337669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.989 [2024-11-21 02:23:18.417183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.367 02:23:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:39.367 00:06:39.367 SPDK Configuration: 00:06:39.367 Core mask: 0x1 00:06:39.367 00:06:39.367 Accel Perf Configuration: 00:06:39.367 Workload Type: compress 00:06:39.367 Transfer size: 4096 bytes 00:06:39.367 Vector count 1 00:06:39.367 Module: software 00:06:39.367 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.367 Queue depth: 32 00:06:39.367 Allocate depth: 32 00:06:39.367 # threads/core: 1 00:06:39.367 Run time: 1 seconds 00:06:39.367 Verify: No 00:06:39.367 00:06:39.367 Running for 1 seconds... 
00:06:39.367 00:06:39.368 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.368 ------------------------------------------------------------------------------------ 00:06:39.368 0,0 60352/s 251 MiB/s 0 0 00:06:39.368 ==================================================================================== 00:06:39.368 Total 60352/s 235 MiB/s 0 0' 00:06:39.368 02:23:19 -- accel/accel.sh@20 -- # IFS=: 00:06:39.368 02:23:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.368 02:23:19 -- accel/accel.sh@20 -- # read -r var val 00:06:39.368 02:23:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.368 02:23:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.368 02:23:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.368 02:23:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.368 02:23:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.368 02:23:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.368 02:23:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.368 02:23:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.368 02:23:19 -- accel/accel.sh@42 -- # jq -r . 00:06:39.368 [2024-11-21 02:23:19.742316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.368 [2024-11-21 02:23:19.742437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59194 ] 00:06:39.368 [2024-11-21 02:23:19.878749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.368 [2024-11-21 02:23:19.958570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val= 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val= 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val= 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val=0x1 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val= 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val= 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val=compress 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 
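# --- aside: the Bandwidth column in these summaries is just transfers/s times the
# reported transfer size; recomputing the compress Total row above (60352 transfers/s
# at 4096 bytes each) as a quick sanity check:
echo $(( 60352 * 4096 / 1024 / 1024 ))   # prints 235, i.e. ~235 MiB/s as reported
# ---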
00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val= 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val=software 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val=32 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val=32 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val=1 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val=No 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val= 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:39.627 02:23:20 -- accel/accel.sh@21 -- # val= 00:06:39.627 02:23:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # IFS=: 00:06:39.627 02:23:20 -- accel/accel.sh@20 -- # read -r var val 00:06:41.006 02:23:21 -- accel/accel.sh@21 -- # val= 00:06:41.006 02:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # IFS=: 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # read -r var val 00:06:41.006 02:23:21 -- accel/accel.sh@21 -- # val= 00:06:41.006 02:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # IFS=: 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # read -r var val 00:06:41.006 02:23:21 -- accel/accel.sh@21 -- # val= 00:06:41.006 02:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # IFS=: 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # read -r var val 00:06:41.006 02:23:21 -- accel/accel.sh@21 -- # val= 
00:06:41.006 02:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # IFS=: 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # read -r var val 00:06:41.006 02:23:21 -- accel/accel.sh@21 -- # val= 00:06:41.006 02:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # IFS=: 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # read -r var val 00:06:41.006 02:23:21 -- accel/accel.sh@21 -- # val= 00:06:41.006 02:23:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # IFS=: 00:06:41.006 02:23:21 -- accel/accel.sh@20 -- # read -r var val 00:06:41.006 02:23:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.006 02:23:21 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:41.006 02:23:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.006 00:06:41.006 real 0m3.073s 00:06:41.006 user 0m2.608s 00:06:41.006 sys 0m0.264s 00:06:41.006 02:23:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.006 ************************************ 00:06:41.006 END TEST accel_comp 00:06:41.006 ************************************ 00:06:41.006 02:23:21 -- common/autotest_common.sh@10 -- # set +x 00:06:41.006 02:23:21 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.006 02:23:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:41.006 02:23:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.006 02:23:21 -- common/autotest_common.sh@10 -- # set +x 00:06:41.006 ************************************ 00:06:41.006 START TEST accel_decomp 00:06:41.006 ************************************ 00:06:41.006 02:23:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.006 02:23:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.006 02:23:21 -- accel/accel.sh@17 -- # local accel_module 00:06:41.006 02:23:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.006 02:23:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:41.006 02:23:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.006 02:23:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.006 02:23:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.006 02:23:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.006 02:23:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.006 02:23:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.006 02:23:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.006 02:23:21 -- accel/accel.sh@42 -- # jq -r . 00:06:41.006 [2024-11-21 02:23:21.341894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.006 [2024-11-21 02:23:21.342338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59228 ] 00:06:41.006 [2024-11-21 02:23:21.476957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.006 [2024-11-21 02:23:21.551166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.385 02:23:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:42.385 00:06:42.385 SPDK Configuration: 00:06:42.385 Core mask: 0x1 00:06:42.385 00:06:42.385 Accel Perf Configuration: 00:06:42.385 Workload Type: decompress 00:06:42.385 Transfer size: 4096 bytes 00:06:42.385 Vector count 1 00:06:42.385 Module: software 00:06:42.385 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.385 Queue depth: 32 00:06:42.385 Allocate depth: 32 00:06:42.385 # threads/core: 1 00:06:42.385 Run time: 1 seconds 00:06:42.385 Verify: Yes 00:06:42.385 00:06:42.385 Running for 1 seconds... 00:06:42.385 00:06:42.385 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.385 ------------------------------------------------------------------------------------ 00:06:42.385 0,0 86176/s 158 MiB/s 0 0 00:06:42.385 ==================================================================================== 00:06:42.385 Total 86176/s 336 MiB/s 0 0' 00:06:42.385 02:23:22 -- accel/accel.sh@20 -- # IFS=: 00:06:42.385 02:23:22 -- accel/accel.sh@20 -- # read -r var val 00:06:42.385 02:23:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.385 02:23:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:42.385 02:23:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.385 02:23:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.385 02:23:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.385 02:23:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.385 02:23:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.385 02:23:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.385 02:23:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.385 02:23:22 -- accel/accel.sh@42 -- # jq -r . 00:06:42.385 [2024-11-21 02:23:22.872423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:42.385 [2024-11-21 02:23:22.872715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59248 ] 00:06:42.385 [2024-11-21 02:23:23.006841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.645 [2024-11-21 02:23:23.079087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val= 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val= 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val= 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val=0x1 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val= 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val= 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val=decompress 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val= 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val=software 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val=32 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- 
accel/accel.sh@21 -- # val=32 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val=1 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val=Yes 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val= 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:42.645 02:23:23 -- accel/accel.sh@21 -- # val= 00:06:42.645 02:23:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # IFS=: 00:06:42.645 02:23:23 -- accel/accel.sh@20 -- # read -r var val 00:06:44.024 02:23:24 -- accel/accel.sh@21 -- # val= 00:06:44.024 02:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # IFS=: 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # read -r var val 00:06:44.024 02:23:24 -- accel/accel.sh@21 -- # val= 00:06:44.024 02:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # IFS=: 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # read -r var val 00:06:44.024 02:23:24 -- accel/accel.sh@21 -- # val= 00:06:44.024 02:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # IFS=: 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # read -r var val 00:06:44.024 02:23:24 -- accel/accel.sh@21 -- # val= 00:06:44.024 02:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # IFS=: 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # read -r var val 00:06:44.024 02:23:24 -- accel/accel.sh@21 -- # val= 00:06:44.024 02:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # IFS=: 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # read -r var val 00:06:44.024 02:23:24 -- accel/accel.sh@21 -- # val= 00:06:44.024 02:23:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # IFS=: 00:06:44.024 02:23:24 -- accel/accel.sh@20 -- # read -r var val 00:06:44.024 02:23:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.024 02:23:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:44.024 02:23:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.024 00:06:44.024 real 0m3.062s 00:06:44.024 user 0m2.604s 00:06:44.024 sys 0m0.253s 00:06:44.024 02:23:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.024 ************************************ 00:06:44.024 END TEST accel_decomp 00:06:44.024 ************************************ 00:06:44.024 02:23:24 -- common/autotest_common.sh@10 -- # set +x 00:06:44.024 02:23:24 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:06:44.024 02:23:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:44.024 02:23:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.024 02:23:24 -- common/autotest_common.sh@10 -- # set +x 00:06:44.024 ************************************ 00:06:44.024 START TEST accel_decmop_full 00:06:44.024 ************************************ 00:06:44.024 02:23:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:44.024 02:23:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.024 02:23:24 -- accel/accel.sh@17 -- # local accel_module 00:06:44.024 02:23:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:44.024 02:23:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:44.024 02:23:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.024 02:23:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.024 02:23:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.024 02:23:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.024 02:23:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.024 02:23:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.024 02:23:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.024 02:23:24 -- accel/accel.sh@42 -- # jq -r . 00:06:44.024 [2024-11-21 02:23:24.459645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.024 [2024-11-21 02:23:24.459757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:06:44.024 [2024-11-21 02:23:24.587648] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.024 [2024-11-21 02:23:24.660736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.403 02:23:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:45.403 00:06:45.403 SPDK Configuration: 00:06:45.403 Core mask: 0x1 00:06:45.403 00:06:45.403 Accel Perf Configuration: 00:06:45.403 Workload Type: decompress 00:06:45.403 Transfer size: 111250 bytes 00:06:45.403 Vector count 1 00:06:45.403 Module: software 00:06:45.403 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.403 Queue depth: 32 00:06:45.403 Allocate depth: 32 00:06:45.403 # threads/core: 1 00:06:45.403 Run time: 1 seconds 00:06:45.403 Verify: Yes 00:06:45.403 00:06:45.403 Running for 1 seconds... 
00:06:45.403 00:06:45.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.403 ------------------------------------------------------------------------------------ 00:06:45.403 0,0 5728/s 236 MiB/s 0 0 00:06:45.403 ==================================================================================== 00:06:45.403 Total 5728/s 607 MiB/s 0 0' 00:06:45.403 02:23:25 -- accel/accel.sh@20 -- # IFS=: 00:06:45.403 02:23:25 -- accel/accel.sh@20 -- # read -r var val 00:06:45.403 02:23:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.403 02:23:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:45.403 02:23:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.403 02:23:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.403 02:23:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.403 02:23:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.403 02:23:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.403 02:23:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.403 02:23:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.403 02:23:25 -- accel/accel.sh@42 -- # jq -r . 00:06:45.403 [2024-11-21 02:23:25.991982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.403 [2024-11-21 02:23:25.992077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59304 ] 00:06:45.663 [2024-11-21 02:23:26.129730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.663 [2024-11-21 02:23:26.206496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val= 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val= 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val= 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val=0x1 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val= 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val= 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val=decompress 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:45.663 02:23:26 -- accel/accel.sh@20 
-- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val= 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val=software 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val=32 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val=32 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val=1 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val=Yes 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val= 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:45.663 02:23:26 -- accel/accel.sh@21 -- # val= 00:06:45.663 02:23:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # IFS=: 00:06:45.663 02:23:26 -- accel/accel.sh@20 -- # read -r var val 00:06:47.042 02:23:27 -- accel/accel.sh@21 -- # val= 00:06:47.042 02:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # IFS=: 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # read -r var val 00:06:47.042 02:23:27 -- accel/accel.sh@21 -- # val= 00:06:47.042 02:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # IFS=: 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # read -r var val 00:06:47.042 02:23:27 -- accel/accel.sh@21 -- # val= 00:06:47.042 02:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # IFS=: 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # read -r var val 00:06:47.042 02:23:27 -- accel/accel.sh@21 -- # 
val= 00:06:47.042 02:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # IFS=: 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # read -r var val 00:06:47.042 02:23:27 -- accel/accel.sh@21 -- # val= 00:06:47.042 02:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # IFS=: 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # read -r var val 00:06:47.042 02:23:27 -- accel/accel.sh@21 -- # val= 00:06:47.042 02:23:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # IFS=: 00:06:47.042 02:23:27 -- accel/accel.sh@20 -- # read -r var val 00:06:47.042 02:23:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.042 02:23:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:47.042 ************************************ 00:06:47.042 END TEST accel_decmop_full 00:06:47.042 ************************************ 00:06:47.042 02:23:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.042 00:06:47.042 real 0m3.078s 00:06:47.042 user 0m2.607s 00:06:47.042 sys 0m0.268s 00:06:47.042 02:23:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.042 02:23:27 -- common/autotest_common.sh@10 -- # set +x 00:06:47.042 02:23:27 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.042 02:23:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:47.042 02:23:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.042 02:23:27 -- common/autotest_common.sh@10 -- # set +x 00:06:47.042 ************************************ 00:06:47.042 START TEST accel_decomp_mcore 00:06:47.042 ************************************ 00:06:47.042 02:23:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.042 02:23:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.042 02:23:27 -- accel/accel.sh@17 -- # local accel_module 00:06:47.042 02:23:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.042 02:23:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:47.042 02:23:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.042 02:23:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.042 02:23:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.042 02:23:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.042 02:23:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.042 02:23:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.042 02:23:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.042 02:23:27 -- accel/accel.sh@42 -- # jq -r . 00:06:47.042 [2024-11-21 02:23:27.591071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:47.042 [2024-11-21 02:23:27.591276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59338 ] 00:06:47.301 [2024-11-21 02:23:27.718766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.301 [2024-11-21 02:23:27.799794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.301 [2024-11-21 02:23:27.799893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.301 [2024-11-21 02:23:27.800029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.301 [2024-11-21 02:23:27.800032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.678 02:23:29 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:48.678 00:06:48.678 SPDK Configuration: 00:06:48.678 Core mask: 0xf 00:06:48.678 00:06:48.678 Accel Perf Configuration: 00:06:48.678 Workload Type: decompress 00:06:48.678 Transfer size: 4096 bytes 00:06:48.678 Vector count 1 00:06:48.678 Module: software 00:06:48.678 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.678 Queue depth: 32 00:06:48.678 Allocate depth: 32 00:06:48.678 # threads/core: 1 00:06:48.678 Run time: 1 seconds 00:06:48.678 Verify: Yes 00:06:48.678 00:06:48.678 Running for 1 seconds... 00:06:48.678 00:06:48.678 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.678 ------------------------------------------------------------------------------------ 00:06:48.678 0,0 59584/s 109 MiB/s 0 0 00:06:48.678 3,0 52512/s 96 MiB/s 0 0 00:06:48.678 2,0 53088/s 97 MiB/s 0 0 00:06:48.678 1,0 51776/s 95 MiB/s 0 0 00:06:48.678 ==================================================================================== 00:06:48.678 Total 216960/s 847 MiB/s 0 0' 00:06:48.678 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.678 02:23:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:48.678 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.678 02:23:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:48.678 02:23:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.678 02:23:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.678 02:23:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.678 02:23:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.678 02:23:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.678 02:23:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.678 02:23:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.678 02:23:29 -- accel/accel.sh@42 -- # jq -r . 00:06:48.678 [2024-11-21 02:23:29.137778] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:48.678 [2024-11-21 02:23:29.137884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:06:48.678 [2024-11-21 02:23:29.275024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.939 [2024-11-21 02:23:29.355820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.939 [2024-11-21 02:23:29.355893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.939 [2024-11-21 02:23:29.356011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.939 [2024-11-21 02:23:29.356019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val=0xf 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val=decompress 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val=software 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 
00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val=32 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val=32 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val=1 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val=Yes 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:48.939 02:23:29 -- accel/accel.sh@21 -- # val= 00:06:48.939 02:23:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # IFS=: 00:06:48.939 02:23:29 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- 
accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@21 -- # val= 00:06:50.345 02:23:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # IFS=: 00:06:50.345 02:23:30 -- accel/accel.sh@20 -- # read -r var val 00:06:50.345 02:23:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.345 02:23:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:50.345 02:23:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.345 00:06:50.345 real 0m3.106s 00:06:50.345 user 0m9.679s 00:06:50.345 sys 0m0.286s 00:06:50.345 02:23:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.345 02:23:30 -- common/autotest_common.sh@10 -- # set +x 00:06:50.345 ************************************ 00:06:50.345 END TEST accel_decomp_mcore 00:06:50.345 ************************************ 00:06:50.346 02:23:30 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.346 02:23:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:50.346 02:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.346 02:23:30 -- common/autotest_common.sh@10 -- # set +x 00:06:50.346 ************************************ 00:06:50.346 START TEST accel_decomp_full_mcore 00:06:50.346 ************************************ 00:06:50.346 02:23:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.346 02:23:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.346 02:23:30 -- accel/accel.sh@17 -- # local accel_module 00:06:50.346 02:23:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.346 02:23:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:50.346 02:23:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.346 02:23:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.346 02:23:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.346 02:23:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.346 02:23:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.346 02:23:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.346 02:23:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.346 02:23:30 -- accel/accel.sh@42 -- # jq -r . 00:06:50.346 [2024-11-21 02:23:30.746822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.346 [2024-11-21 02:23:30.747051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59400 ] 00:06:50.346 [2024-11-21 02:23:30.879321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.346 [2024-11-21 02:23:30.957298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.346 [2024-11-21 02:23:30.957420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.346 [2024-11-21 02:23:30.957544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.346 [2024-11-21 02:23:30.957554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.725 02:23:32 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:51.725 00:06:51.725 SPDK Configuration: 00:06:51.725 Core mask: 0xf 00:06:51.725 00:06:51.725 Accel Perf Configuration: 00:06:51.725 Workload Type: decompress 00:06:51.725 Transfer size: 111250 bytes 00:06:51.725 Vector count 1 00:06:51.725 Module: software 00:06:51.725 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.725 Queue depth: 32 00:06:51.725 Allocate depth: 32 00:06:51.725 # threads/core: 1 00:06:51.725 Run time: 1 seconds 00:06:51.725 Verify: Yes 00:06:51.725 00:06:51.725 Running for 1 seconds... 00:06:51.725 00:06:51.725 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.725 ------------------------------------------------------------------------------------ 00:06:51.725 0,0 5536/s 228 MiB/s 0 0 00:06:51.725 3,0 5056/s 208 MiB/s 0 0 00:06:51.725 2,0 5120/s 211 MiB/s 0 0 00:06:51.725 1,0 5536/s 228 MiB/s 0 0 00:06:51.725 ==================================================================================== 00:06:51.725 Total 21248/s 2254 MiB/s 0 0' 00:06:51.725 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.725 02:23:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.725 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.725 02:23:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.725 02:23:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.725 02:23:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.725 02:23:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.725 02:23:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.725 02:23:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.725 02:23:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.725 02:23:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.725 02:23:32 -- accel/accel.sh@42 -- # jq -r . 00:06:51.725 [2024-11-21 02:23:32.297890] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
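The "full" variants add -o 0 to the accel_perf command line, and the reported transfer size grows from 4096 to 111250 bytes, so each operation apparently decompresses a whole block of the bib input rather than a 4 KiB slice. The Total row above is consistent with that: 21248 transfers/s × 111250 B ≈ 2254 MiB/s, with the four per-core counts again summing to 21248/s.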
00:06:51.725 [2024-11-21 02:23:32.298560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59423 ] 00:06:51.984 [2024-11-21 02:23:32.435230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.984 [2024-11-21 02:23:32.510430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.984 [2024-11-21 02:23:32.510547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.984 [2024-11-21 02:23:32.510648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.984 [2024-11-21 02:23:32.510651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val= 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val= 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val= 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val=0xf 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val= 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val= 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val=decompress 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val= 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.984 02:23:32 -- accel/accel.sh@21 -- # val=software 00:06:51.984 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.984 02:23:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.984 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.985 02:23:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:51.985 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # IFS=: 
00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.985 02:23:32 -- accel/accel.sh@21 -- # val=32 00:06:51.985 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.985 02:23:32 -- accel/accel.sh@21 -- # val=32 00:06:51.985 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.985 02:23:32 -- accel/accel.sh@21 -- # val=1 00:06:51.985 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.985 02:23:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.985 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.985 02:23:32 -- accel/accel.sh@21 -- # val=Yes 00:06:51.985 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.985 02:23:32 -- accel/accel.sh@21 -- # val= 00:06:51.985 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:51.985 02:23:32 -- accel/accel.sh@21 -- # val= 00:06:51.985 02:23:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # IFS=: 00:06:51.985 02:23:32 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.362 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 02:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.362 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 02:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.362 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 02:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.362 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.362 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.362 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.362 02:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.363 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.363 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.363 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.363 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.363 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.363 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.363 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.363 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.363 02:23:33 -- 
accel/accel.sh@20 -- # read -r var val 00:06:53.363 02:23:33 -- accel/accel.sh@21 -- # val= 00:06:53.363 02:23:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # IFS=: 00:06:53.363 02:23:33 -- accel/accel.sh@20 -- # read -r var val 00:06:53.363 02:23:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.363 02:23:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:53.363 02:23:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.363 00:06:53.363 real 0m3.110s 00:06:53.363 user 0m9.737s 00:06:53.363 sys 0m0.282s 00:06:53.363 02:23:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.363 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:06:53.363 ************************************ 00:06:53.363 END TEST accel_decomp_full_mcore 00:06:53.363 ************************************ 00:06:53.363 02:23:33 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:53.363 02:23:33 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:53.363 02:23:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.363 02:23:33 -- common/autotest_common.sh@10 -- # set +x 00:06:53.363 ************************************ 00:06:53.363 START TEST accel_decomp_mthread 00:06:53.363 ************************************ 00:06:53.363 02:23:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:53.363 02:23:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.363 02:23:33 -- accel/accel.sh@17 -- # local accel_module 00:06:53.363 02:23:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:53.363 02:23:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.363 02:23:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:53.363 02:23:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.363 02:23:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.363 02:23:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.363 02:23:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.363 02:23:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.363 02:23:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.363 02:23:33 -- accel/accel.sh@42 -- # jq -r . 00:06:53.363 [2024-11-21 02:23:33.915872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.363 [2024-11-21 02:23:33.915958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:06:53.622 [2024-11-21 02:23:34.049534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.622 [2024-11-21 02:23:34.124393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.000 02:23:35 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:55.000 00:06:55.000 SPDK Configuration: 00:06:55.000 Core mask: 0x1 00:06:55.000 00:06:55.000 Accel Perf Configuration: 00:06:55.000 Workload Type: decompress 00:06:55.000 Transfer size: 4096 bytes 00:06:55.000 Vector count 1 00:06:55.000 Module: software 00:06:55.000 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.000 Queue depth: 32 00:06:55.000 Allocate depth: 32 00:06:55.000 # threads/core: 2 00:06:55.000 Run time: 1 seconds 00:06:55.000 Verify: Yes 00:06:55.000 00:06:55.000 Running for 1 seconds... 00:06:55.000 00:06:55.000 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.000 ------------------------------------------------------------------------------------ 00:06:55.000 0,1 43680/s 80 MiB/s 0 0 00:06:55.000 0,0 43520/s 80 MiB/s 0 0 00:06:55.000 ==================================================================================== 00:06:55.000 Total 87200/s 340 MiB/s 0 0' 00:06:55.000 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.000 02:23:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:55.000 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.000 02:23:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:55.000 02:23:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.000 02:23:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.000 02:23:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.000 02:23:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.000 02:23:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.000 02:23:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.000 02:23:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.000 02:23:35 -- accel/accel.sh@42 -- # jq -r . 00:06:55.000 [2024-11-21 02:23:35.452841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
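Because this variant passes -T 2 (reported as "# threads/core: 2" in the configuration block), accel_perf runs two worker threads on the single core selected by mask 0x1, which is why the table above has rows 0,0 and 0,1. Their transfer counts sum to the Total row: 43520 + 43680 = 87200 transfers/s, or about 340 MiB/s at 4096 bytes per transfer.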
00:06:55.000 [2024-11-21 02:23:35.453250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59480 ] 00:06:55.000 [2024-11-21 02:23:35.590414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.259 [2024-11-21 02:23:35.667187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val= 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val= 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val= 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val=0x1 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val= 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val= 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val=decompress 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val= 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val=software 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- accel/accel.sh@21 -- # val=32 00:06:55.259 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.259 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.259 02:23:35 -- 
accel/accel.sh@21 -- # val=32 00:06:55.260 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.260 02:23:35 -- accel/accel.sh@21 -- # val=2 00:06:55.260 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.260 02:23:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.260 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.260 02:23:35 -- accel/accel.sh@21 -- # val=Yes 00:06:55.260 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.260 02:23:35 -- accel/accel.sh@21 -- # val= 00:06:55.260 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:55.260 02:23:35 -- accel/accel.sh@21 -- # val= 00:06:55.260 02:23:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # IFS=: 00:06:55.260 02:23:35 -- accel/accel.sh@20 -- # read -r var val 00:06:56.636 02:23:36 -- accel/accel.sh@21 -- # val= 00:06:56.636 02:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # IFS=: 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # read -r var val 00:06:56.636 02:23:36 -- accel/accel.sh@21 -- # val= 00:06:56.636 02:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # IFS=: 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # read -r var val 00:06:56.636 02:23:36 -- accel/accel.sh@21 -- # val= 00:06:56.636 02:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # IFS=: 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # read -r var val 00:06:56.636 02:23:36 -- accel/accel.sh@21 -- # val= 00:06:56.636 02:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # IFS=: 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # read -r var val 00:06:56.636 02:23:36 -- accel/accel.sh@21 -- # val= 00:06:56.636 02:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # IFS=: 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # read -r var val 00:06:56.636 02:23:36 -- accel/accel.sh@21 -- # val= 00:06:56.636 02:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # IFS=: 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # read -r var val 00:06:56.636 02:23:36 -- accel/accel.sh@21 -- # val= 00:06:56.636 02:23:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # IFS=: 00:06:56.636 02:23:36 -- accel/accel.sh@20 -- # read -r var val 00:06:56.636 02:23:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.636 02:23:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:56.636 ************************************ 00:06:56.636 END TEST accel_decomp_mthread 00:06:56.636 ************************************ 00:06:56.636 02:23:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.636 00:06:56.636 real 0m3.088s 00:06:56.636 user 0m2.623s 00:06:56.636 sys 0m0.262s 00:06:56.636 02:23:36 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:06:56.636 02:23:36 -- common/autotest_common.sh@10 -- # set +x 00:06:56.636 02:23:37 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.636 02:23:37 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:56.636 02:23:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.636 02:23:37 -- common/autotest_common.sh@10 -- # set +x 00:06:56.636 ************************************ 00:06:56.636 START TEST accel_deomp_full_mthread 00:06:56.636 ************************************ 00:06:56.636 02:23:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.636 02:23:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.636 02:23:37 -- accel/accel.sh@17 -- # local accel_module 00:06:56.636 02:23:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.636 02:23:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.636 02:23:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.636 02:23:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.636 02:23:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.636 02:23:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.636 02:23:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.636 02:23:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.636 02:23:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.636 02:23:37 -- accel/accel.sh@42 -- # jq -r . 00:06:56.636 [2024-11-21 02:23:37.054629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.636 [2024-11-21 02:23:37.054981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59520 ] 00:06:56.636 [2024-11-21 02:23:37.182795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.636 [2024-11-21 02:23:37.257897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.013 02:23:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:58.013 00:06:58.013 SPDK Configuration: 00:06:58.013 Core mask: 0x1 00:06:58.013 00:06:58.013 Accel Perf Configuration: 00:06:58.013 Workload Type: decompress 00:06:58.013 Transfer size: 111250 bytes 00:06:58.013 Vector count 1 00:06:58.013 Module: software 00:06:58.013 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:58.013 Queue depth: 32 00:06:58.013 Allocate depth: 32 00:06:58.013 # threads/core: 2 00:06:58.013 Run time: 1 seconds 00:06:58.013 Verify: Yes 00:06:58.013 00:06:58.013 Running for 1 seconds... 
00:06:58.013 00:06:58.013 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.013 ------------------------------------------------------------------------------------ 00:06:58.013 0,1 2912/s 120 MiB/s 0 0 00:06:58.013 0,0 2880/s 118 MiB/s 0 0 00:06:58.013 ==================================================================================== 00:06:58.013 Total 5792/s 614 MiB/s 0 0' 00:06:58.013 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.013 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.013 02:23:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.013 02:23:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.013 02:23:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.013 02:23:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.013 02:23:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.013 02:23:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.013 02:23:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.013 02:23:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.013 02:23:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.013 02:23:38 -- accel/accel.sh@42 -- # jq -r . 00:06:58.013 [2024-11-21 02:23:38.607327] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.013 [2024-11-21 02:23:38.607435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59534 ] 00:06:58.273 [2024-11-21 02:23:38.742755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.273 [2024-11-21 02:23:38.814029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val= 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val= 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val= 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val=0x1 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val= 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val= 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val=decompress 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val= 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val=software 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val=32 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val=32 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val=2 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val=Yes 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val= 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:58.273 02:23:38 -- accel/accel.sh@21 -- # val= 00:06:58.273 02:23:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # IFS=: 00:06:58.273 02:23:38 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 02:23:40 -- accel/accel.sh@21 -- # val= 00:06:59.646 02:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 02:23:40 -- accel/accel.sh@21 -- # val= 00:06:59.646 02:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 02:23:40 -- accel/accel.sh@21 -- # val= 00:06:59.646 02:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # 
read -r var val 00:06:59.646 02:23:40 -- accel/accel.sh@21 -- # val= 00:06:59.646 02:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 02:23:40 -- accel/accel.sh@21 -- # val= 00:06:59.646 02:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 02:23:40 -- accel/accel.sh@21 -- # val= 00:06:59.646 02:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 02:23:40 -- accel/accel.sh@21 -- # val= 00:06:59.646 02:23:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # IFS=: 00:06:59.646 02:23:40 -- accel/accel.sh@20 -- # read -r var val 00:06:59.646 02:23:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.646 02:23:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:59.646 02:23:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.646 00:06:59.646 real 0m3.121s 00:06:59.646 user 0m2.660s 00:06:59.646 sys 0m0.258s 00:06:59.646 02:23:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.646 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 ************************************ 00:06:59.646 END TEST accel_deomp_full_mthread 00:06:59.646 ************************************ 00:06:59.646 02:23:40 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:59.646 02:23:40 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:59.646 02:23:40 -- accel/accel.sh@129 -- # build_accel_config 00:06:59.646 02:23:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.646 02:23:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:59.646 02:23:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.646 02:23:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.646 02:23:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.646 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:06:59.646 02:23:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.646 02:23:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.646 02:23:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.646 02:23:40 -- accel/accel.sh@42 -- # jq -r . 00:06:59.646 ************************************ 00:06:59.646 START TEST accel_dif_functional_tests 00:06:59.646 ************************************ 00:06:59.646 02:23:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:59.646 [2024-11-21 02:23:40.253323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
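A note on the accel_dif suite that starts here: the dif.c "*ERROR*: Failed to compare ..." messages interleaved with the test names below are expected. The "verify: DIF not generated / incorrect" cases deliberately feed mismatching Guard, App Tag and Ref Tag values and check that verification rejects them, so each of those tests is still reported as "passed" and the run summary shows 20/20 tests and 204/204 asserts.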
00:06:59.646 [2024-11-21 02:23:40.253394] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59575 ] 00:06:59.906 [2024-11-21 02:23:40.383351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.906 [2024-11-21 02:23:40.468212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.906 [2024-11-21 02:23:40.468376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.906 [2024-11-21 02:23:40.468378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.165 00:07:00.165 00:07:00.165 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.165 http://cunit.sourceforge.net/ 00:07:00.165 00:07:00.165 00:07:00.165 Suite: accel_dif 00:07:00.165 Test: verify: DIF generated, GUARD check ...passed 00:07:00.165 Test: verify: DIF generated, APPTAG check ...passed 00:07:00.165 Test: verify: DIF generated, REFTAG check ...passed 00:07:00.165 Test: verify: DIF not generated, GUARD check ...passed 00:07:00.165 Test: verify: DIF not generated, APPTAG check ...passed 00:07:00.165 Test: verify: DIF not generated, REFTAG check ...[2024-11-21 02:23:40.584240] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:00.165 [2024-11-21 02:23:40.584383] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:00.165 [2024-11-21 02:23:40.584449] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:00.165 [2024-11-21 02:23:40.584488] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:00.165 passed 00:07:00.165 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:00.165 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-21 02:23:40.584534] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:00.165 [2024-11-21 02:23:40.584616] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:00.165 passed 00:07:00.165 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:00.165 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:00.165 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-11-21 02:23:40.584731] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:00.165 passed 00:07:00.165 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:00.165 Test: generate copy: DIF generated, GUARD check ...passed 00:07:00.165 Test: generate copy: DIF generated, APTTAG check ...[2024-11-21 02:23:40.585149] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:00.165 passed 00:07:00.165 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:00.165 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:00.165 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:00.165 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:00.165 Test: generate copy: iovecs-len validate ...passed 00:07:00.165 Test: generate copy: buffer alignment validate ...passed 00:07:00.165 00:07:00.166 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.166 suites 1 1 n/a 0 0 00:07:00.166 tests 20 20 20 0 0 00:07:00.166 
asserts 204 204 204 0 n/a 00:07:00.166 00:07:00.166 Elapsed time = 0.005 seconds 00:07:00.166 [2024-11-21 02:23:40.585684] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:00.425 00:07:00.425 real 0m0.661s 00:07:00.425 user 0m0.975s 00:07:00.425 sys 0m0.166s 00:07:00.425 02:23:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.425 ************************************ 00:07:00.425 END TEST accel_dif_functional_tests 00:07:00.425 ************************************ 00:07:00.425 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:00.425 00:07:00.425 real 1m7.316s 00:07:00.425 user 1m11.143s 00:07:00.425 sys 0m7.208s 00:07:00.425 02:23:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.425 ************************************ 00:07:00.425 END TEST accel 00:07:00.425 ************************************ 00:07:00.425 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:00.425 02:23:40 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:00.425 02:23:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.425 02:23:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.425 02:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:00.425 ************************************ 00:07:00.425 START TEST accel_rpc 00:07:00.425 ************************************ 00:07:00.425 02:23:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:00.425 * Looking for test storage... 00:07:00.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:00.425 02:23:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:00.425 02:23:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:00.425 02:23:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:00.684 02:23:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:00.684 02:23:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:00.684 02:23:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:00.684 02:23:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:00.684 02:23:41 -- scripts/common.sh@335 -- # IFS=.-: 00:07:00.684 02:23:41 -- scripts/common.sh@335 -- # read -ra ver1 00:07:00.684 02:23:41 -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.684 02:23:41 -- scripts/common.sh@336 -- # read -ra ver2 00:07:00.684 02:23:41 -- scripts/common.sh@337 -- # local 'op=<' 00:07:00.684 02:23:41 -- scripts/common.sh@339 -- # ver1_l=2 00:07:00.684 02:23:41 -- scripts/common.sh@340 -- # ver2_l=1 00:07:00.684 02:23:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:00.684 02:23:41 -- scripts/common.sh@343 -- # case "$op" in 00:07:00.684 02:23:41 -- scripts/common.sh@344 -- # : 1 00:07:00.684 02:23:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:00.684 02:23:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.684 02:23:41 -- scripts/common.sh@364 -- # decimal 1 00:07:00.684 02:23:41 -- scripts/common.sh@352 -- # local d=1 00:07:00.684 02:23:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.684 02:23:41 -- scripts/common.sh@354 -- # echo 1 00:07:00.684 02:23:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:00.684 02:23:41 -- scripts/common.sh@365 -- # decimal 2 00:07:00.684 02:23:41 -- scripts/common.sh@352 -- # local d=2 00:07:00.684 02:23:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.684 02:23:41 -- scripts/common.sh@354 -- # echo 2 00:07:00.684 02:23:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:00.684 02:23:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:00.684 02:23:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:00.684 02:23:41 -- scripts/common.sh@367 -- # return 0 00:07:00.684 02:23:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.684 02:23:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:00.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.684 --rc genhtml_branch_coverage=1 00:07:00.684 --rc genhtml_function_coverage=1 00:07:00.684 --rc genhtml_legend=1 00:07:00.684 --rc geninfo_all_blocks=1 00:07:00.684 --rc geninfo_unexecuted_blocks=1 00:07:00.684 00:07:00.684 ' 00:07:00.684 02:23:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:00.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.684 --rc genhtml_branch_coverage=1 00:07:00.684 --rc genhtml_function_coverage=1 00:07:00.684 --rc genhtml_legend=1 00:07:00.684 --rc geninfo_all_blocks=1 00:07:00.684 --rc geninfo_unexecuted_blocks=1 00:07:00.684 00:07:00.684 ' 00:07:00.684 02:23:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:00.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.684 --rc genhtml_branch_coverage=1 00:07:00.684 --rc genhtml_function_coverage=1 00:07:00.684 --rc genhtml_legend=1 00:07:00.684 --rc geninfo_all_blocks=1 00:07:00.684 --rc geninfo_unexecuted_blocks=1 00:07:00.684 00:07:00.684 ' 00:07:00.685 02:23:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:00.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.685 --rc genhtml_branch_coverage=1 00:07:00.685 --rc genhtml_function_coverage=1 00:07:00.685 --rc genhtml_legend=1 00:07:00.685 --rc geninfo_all_blocks=1 00:07:00.685 --rc geninfo_unexecuted_blocks=1 00:07:00.685 00:07:00.685 ' 00:07:00.685 02:23:41 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:00.685 02:23:41 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59652 00:07:00.685 02:23:41 -- accel/accel_rpc.sh@15 -- # waitforlisten 59652 00:07:00.685 02:23:41 -- common/autotest_common.sh@829 -- # '[' -z 59652 ']' 00:07:00.685 02:23:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.685 02:23:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.685 02:23:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
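The accel_rpc suite launched here drives everything over JSON-RPC against the spdk_tgt that is started with --wait-for-rpc and listens on /var/tmp/spdk.sock. A minimal hand-driven sketch of the same opcode-assignment flow, using SPDK's scripts/rpc.py client in place of the test helper rpc_cmd (paths are illustrative):

  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
  ./scripts/rpc.py framework_start_init                     # complete startup once opcodes are assigned
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # should print: software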
00:07:00.685 02:23:41 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:00.685 02:23:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.685 02:23:41 -- common/autotest_common.sh@10 -- # set +x 00:07:00.685 [2024-11-21 02:23:41.197004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.685 [2024-11-21 02:23:41.197146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59652 ] 00:07:00.944 [2024-11-21 02:23:41.334269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.944 [2024-11-21 02:23:41.414758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.944 [2024-11-21 02:23:41.414940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.880 02:23:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.880 02:23:42 -- common/autotest_common.sh@862 -- # return 0 00:07:01.880 02:23:42 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:01.880 02:23:42 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:01.880 02:23:42 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:01.880 02:23:42 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:01.880 02:23:42 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:01.880 02:23:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.880 02:23:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.880 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:07:01.880 ************************************ 00:07:01.880 START TEST accel_assign_opcode 00:07:01.881 ************************************ 00:07:01.881 02:23:42 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:01.881 02:23:42 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:01.881 02:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.881 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:07:01.881 [2024-11-21 02:23:42.187430] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:01.881 02:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.881 02:23:42 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:01.881 02:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.881 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:07:01.881 [2024-11-21 02:23:42.195426] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:01.881 02:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.881 02:23:42 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:01.881 02:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.881 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:07:01.881 02:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.881 02:23:42 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:01.881 02:23:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.881 02:23:42 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:01.881 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:07:01.881 02:23:42 -- accel/accel_rpc.sh@42 -- # grep software 
00:07:01.881 02:23:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.140 software 00:07:02.140 00:07:02.140 real 0m0.352s 00:07:02.140 user 0m0.057s 00:07:02.140 sys 0m0.010s 00:07:02.140 02:23:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.140 ************************************ 00:07:02.140 END TEST accel_assign_opcode 00:07:02.140 ************************************ 00:07:02.140 02:23:42 -- common/autotest_common.sh@10 -- # set +x 00:07:02.140 02:23:42 -- accel/accel_rpc.sh@55 -- # killprocess 59652 00:07:02.140 02:23:42 -- common/autotest_common.sh@936 -- # '[' -z 59652 ']' 00:07:02.140 02:23:42 -- common/autotest_common.sh@940 -- # kill -0 59652 00:07:02.140 02:23:42 -- common/autotest_common.sh@941 -- # uname 00:07:02.140 02:23:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.140 02:23:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59652 00:07:02.140 02:23:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.140 killing process with pid 59652 00:07:02.140 02:23:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.140 02:23:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59652' 00:07:02.140 02:23:42 -- common/autotest_common.sh@955 -- # kill 59652 00:07:02.140 02:23:42 -- common/autotest_common.sh@960 -- # wait 59652 00:07:02.707 00:07:02.707 real 0m2.178s 00:07:02.707 user 0m2.209s 00:07:02.707 sys 0m0.547s 00:07:02.707 02:23:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.707 ************************************ 00:07:02.707 END TEST accel_rpc 00:07:02.707 ************************************ 00:07:02.708 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:07:02.708 02:23:43 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:02.708 02:23:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.708 02:23:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.708 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:07:02.708 ************************************ 00:07:02.708 START TEST app_cmdline 00:07:02.708 ************************************ 00:07:02.708 02:23:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:02.708 * Looking for test storage... 
00:07:02.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:02.708 02:23:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:02.708 02:23:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:02.708 02:23:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:02.967 02:23:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:02.967 02:23:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:02.967 02:23:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:02.967 02:23:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:02.967 02:23:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:02.967 02:23:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:02.967 02:23:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.967 02:23:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:02.967 02:23:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:02.967 02:23:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:02.967 02:23:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:02.967 02:23:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:02.967 02:23:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:02.967 02:23:43 -- scripts/common.sh@344 -- # : 1 00:07:02.967 02:23:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:02.967 02:23:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.967 02:23:43 -- scripts/common.sh@364 -- # decimal 1 00:07:02.967 02:23:43 -- scripts/common.sh@352 -- # local d=1 00:07:02.967 02:23:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.967 02:23:43 -- scripts/common.sh@354 -- # echo 1 00:07:02.967 02:23:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:02.967 02:23:43 -- scripts/common.sh@365 -- # decimal 2 00:07:02.967 02:23:43 -- scripts/common.sh@352 -- # local d=2 00:07:02.967 02:23:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.967 02:23:43 -- scripts/common.sh@354 -- # echo 2 00:07:02.967 02:23:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:02.967 02:23:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:02.967 02:23:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:02.967 02:23:43 -- scripts/common.sh@367 -- # return 0 00:07:02.967 02:23:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.967 02:23:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:02.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.967 --rc genhtml_branch_coverage=1 00:07:02.967 --rc genhtml_function_coverage=1 00:07:02.967 --rc genhtml_legend=1 00:07:02.967 --rc geninfo_all_blocks=1 00:07:02.967 --rc geninfo_unexecuted_blocks=1 00:07:02.967 00:07:02.967 ' 00:07:02.967 02:23:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:02.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.967 --rc genhtml_branch_coverage=1 00:07:02.967 --rc genhtml_function_coverage=1 00:07:02.967 --rc genhtml_legend=1 00:07:02.967 --rc geninfo_all_blocks=1 00:07:02.967 --rc geninfo_unexecuted_blocks=1 00:07:02.967 00:07:02.967 ' 00:07:02.967 02:23:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:02.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.968 --rc genhtml_branch_coverage=1 00:07:02.968 --rc genhtml_function_coverage=1 00:07:02.968 --rc genhtml_legend=1 00:07:02.968 --rc geninfo_all_blocks=1 00:07:02.968 --rc geninfo_unexecuted_blocks=1 00:07:02.968 00:07:02.968 ' 00:07:02.968 02:23:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:02.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.968 --rc genhtml_branch_coverage=1 00:07:02.968 --rc genhtml_function_coverage=1 00:07:02.968 --rc genhtml_legend=1 00:07:02.968 --rc geninfo_all_blocks=1 00:07:02.968 --rc geninfo_unexecuted_blocks=1 00:07:02.968 00:07:02.968 ' 00:07:02.968 02:23:43 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:02.968 02:23:43 -- app/cmdline.sh@17 -- # spdk_tgt_pid=59770 00:07:02.968 02:23:43 -- app/cmdline.sh@18 -- # waitforlisten 59770 00:07:02.968 02:23:43 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:02.968 02:23:43 -- common/autotest_common.sh@829 -- # '[' -z 59770 ']' 00:07:02.968 02:23:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.968 02:23:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.968 02:23:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.968 02:23:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.968 02:23:43 -- common/autotest_common.sh@10 -- # set +x 00:07:02.968 [2024-11-21 02:23:43.447274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.968 [2024-11-21 02:23:43.447412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59770 ] 00:07:02.968 [2024-11-21 02:23:43.582808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.227 [2024-11-21 02:23:43.665266] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.227 [2024-11-21 02:23:43.665454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.795 02:23:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.795 02:23:44 -- common/autotest_common.sh@862 -- # return 0 00:07:03.795 02:23:44 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:04.054 { 00:07:04.054 "fields": { 00:07:04.054 "commit": "c13c99a5e", 00:07:04.054 "major": 24, 00:07:04.054 "minor": 1, 00:07:04.054 "patch": 1, 00:07:04.054 "suffix": "-pre" 00:07:04.054 }, 00:07:04.054 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:04.054 } 00:07:04.054 02:23:44 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.054 02:23:44 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.054 02:23:44 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.054 02:23:44 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.054 02:23:44 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.054 02:23:44 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.054 02:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.054 02:23:44 -- app/cmdline.sh@26 -- # sort 00:07:04.054 02:23:44 -- common/autotest_common.sh@10 -- # set +x 00:07:04.054 02:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.313 02:23:44 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.313 02:23:44 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.313 02:23:44 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.313 02:23:44 -- common/autotest_common.sh@650 -- # local es=0 00:07:04.313 02:23:44 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.313 02:23:44 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.313 02:23:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.313 02:23:44 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.313 02:23:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.313 02:23:44 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.313 02:23:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.313 02:23:44 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.313 02:23:44 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:04.313 02:23:44 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.572 2024/11/21 02:23:44 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:04.572 request: 00:07:04.572 { 00:07:04.572 "method": "env_dpdk_get_mem_stats", 00:07:04.572 "params": {} 00:07:04.572 } 00:07:04.572 Got JSON-RPC error response 00:07:04.572 GoRPCClient: error on JSON-RPC call 00:07:04.572 02:23:44 -- common/autotest_common.sh@653 -- # es=1 00:07:04.572 02:23:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.572 02:23:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.572 02:23:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.572 02:23:44 -- app/cmdline.sh@1 -- # killprocess 59770 00:07:04.572 02:23:44 -- common/autotest_common.sh@936 -- # '[' -z 59770 ']' 00:07:04.572 02:23:44 -- common/autotest_common.sh@940 -- # kill -0 59770 00:07:04.572 02:23:45 -- common/autotest_common.sh@941 -- # uname 00:07:04.572 02:23:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.572 02:23:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59770 00:07:04.572 02:23:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.572 killing process with pid 59770 00:07:04.572 02:23:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.572 02:23:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59770' 00:07:04.572 02:23:45 -- common/autotest_common.sh@955 -- # kill 59770 00:07:04.572 02:23:45 -- common/autotest_common.sh@960 -- # wait 59770 00:07:05.139 00:07:05.139 real 0m2.363s 00:07:05.139 user 0m2.816s 00:07:05.139 sys 0m0.573s 00:07:05.139 02:23:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.139 ************************************ 00:07:05.139 END TEST app_cmdline 00:07:05.139 ************************************ 00:07:05.139 02:23:45 -- common/autotest_common.sh@10 -- # set +x 00:07:05.139 02:23:45 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.139 02:23:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.139 02:23:45 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.139 02:23:45 -- common/autotest_common.sh@10 -- # set +x 00:07:05.139 ************************************ 00:07:05.139 START TEST version 00:07:05.139 ************************************ 00:07:05.139 02:23:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.139 * Looking for test storage... 00:07:05.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:05.139 02:23:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:05.139 02:23:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:05.139 02:23:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.398 02:23:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.398 02:23:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.398 02:23:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.398 02:23:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.398 02:23:45 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.398 02:23:45 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.398 02:23:45 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.398 02:23:45 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.398 02:23:45 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.398 02:23:45 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.398 02:23:45 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.398 02:23:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.398 02:23:45 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.398 02:23:45 -- scripts/common.sh@344 -- # : 1 00:07:05.398 02:23:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.398 02:23:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.398 02:23:45 -- scripts/common.sh@364 -- # decimal 1 00:07:05.398 02:23:45 -- scripts/common.sh@352 -- # local d=1 00:07:05.398 02:23:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.398 02:23:45 -- scripts/common.sh@354 -- # echo 1 00:07:05.398 02:23:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.398 02:23:45 -- scripts/common.sh@365 -- # decimal 2 00:07:05.398 02:23:45 -- scripts/common.sh@352 -- # local d=2 00:07:05.398 02:23:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.398 02:23:45 -- scripts/common.sh@354 -- # echo 2 00:07:05.398 02:23:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.398 02:23:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.398 02:23:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.398 02:23:45 -- scripts/common.sh@367 -- # return 0 00:07:05.398 02:23:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.398 02:23:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.398 --rc genhtml_branch_coverage=1 00:07:05.398 --rc genhtml_function_coverage=1 00:07:05.398 --rc genhtml_legend=1 00:07:05.398 --rc geninfo_all_blocks=1 00:07:05.398 --rc geninfo_unexecuted_blocks=1 00:07:05.398 00:07:05.398 ' 00:07:05.398 02:23:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.398 --rc genhtml_branch_coverage=1 00:07:05.398 --rc genhtml_function_coverage=1 00:07:05.398 --rc genhtml_legend=1 00:07:05.398 --rc geninfo_all_blocks=1 00:07:05.398 --rc geninfo_unexecuted_blocks=1 00:07:05.398 00:07:05.398 ' 00:07:05.398 
02:23:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.398 --rc genhtml_branch_coverage=1 00:07:05.398 --rc genhtml_function_coverage=1 00:07:05.398 --rc genhtml_legend=1 00:07:05.398 --rc geninfo_all_blocks=1 00:07:05.398 --rc geninfo_unexecuted_blocks=1 00:07:05.398 00:07:05.398 ' 00:07:05.398 02:23:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.398 --rc genhtml_branch_coverage=1 00:07:05.398 --rc genhtml_function_coverage=1 00:07:05.398 --rc genhtml_legend=1 00:07:05.398 --rc geninfo_all_blocks=1 00:07:05.398 --rc geninfo_unexecuted_blocks=1 00:07:05.398 00:07:05.398 ' 00:07:05.398 02:23:45 -- app/version.sh@17 -- # get_header_version major 00:07:05.398 02:23:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.398 02:23:45 -- app/version.sh@14 -- # tr -d '"' 00:07:05.398 02:23:45 -- app/version.sh@14 -- # cut -f2 00:07:05.398 02:23:45 -- app/version.sh@17 -- # major=24 00:07:05.398 02:23:45 -- app/version.sh@18 -- # get_header_version minor 00:07:05.398 02:23:45 -- app/version.sh@14 -- # cut -f2 00:07:05.398 02:23:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.398 02:23:45 -- app/version.sh@14 -- # tr -d '"' 00:07:05.398 02:23:45 -- app/version.sh@18 -- # minor=1 00:07:05.398 02:23:45 -- app/version.sh@19 -- # get_header_version patch 00:07:05.398 02:23:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.398 02:23:45 -- app/version.sh@14 -- # cut -f2 00:07:05.398 02:23:45 -- app/version.sh@14 -- # tr -d '"' 00:07:05.398 02:23:45 -- app/version.sh@19 -- # patch=1 00:07:05.398 02:23:45 -- app/version.sh@20 -- # get_header_version suffix 00:07:05.398 02:23:45 -- app/version.sh@14 -- # cut -f2 00:07:05.399 02:23:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.399 02:23:45 -- app/version.sh@14 -- # tr -d '"' 00:07:05.399 02:23:45 -- app/version.sh@20 -- # suffix=-pre 00:07:05.399 02:23:45 -- app/version.sh@22 -- # version=24.1 00:07:05.399 02:23:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.399 02:23:45 -- app/version.sh@25 -- # version=24.1.1 00:07:05.399 02:23:45 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:05.399 02:23:45 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:05.399 02:23:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.399 02:23:45 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:05.399 02:23:45 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:05.399 00:07:05.399 real 0m0.258s 00:07:05.399 user 0m0.174s 00:07:05.399 sys 0m0.123s 00:07:05.399 02:23:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.399 ************************************ 00:07:05.399 END TEST version 00:07:05.399 ************************************ 00:07:05.399 02:23:45 -- common/autotest_common.sh@10 -- # set +x 00:07:05.399 02:23:45 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:05.399 
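The version test that just finished is a plain text-scrape of include/spdk/version.h cross-checked against the Python bindings. A condensed sketch of the same parsing, assuming the repository path used in this run ($hdr below is only shorthand, not a variable from the script):

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  # this run yields 24 / 1 / 1 / -pre; with a non-zero patch and the -pre suffix
  # the script assembles 24.1.1rc0, which must match the installed Python package:
  python3 -c 'import spdk; print(spdk.__version__)'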
02:23:45 -- spdk/autotest.sh@191 -- # uname -s 00:07:05.399 02:23:45 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:05.399 02:23:45 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:05.399 02:23:45 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:05.399 02:23:45 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:05.399 02:23:45 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:05.399 02:23:45 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:05.399 02:23:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:05.399 02:23:45 -- common/autotest_common.sh@10 -- # set +x 00:07:05.399 02:23:45 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:05.399 02:23:45 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:05.399 02:23:45 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:05.399 02:23:45 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:05.399 02:23:45 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:05.399 02:23:45 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:05.399 02:23:45 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.399 02:23:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.399 02:23:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.399 02:23:45 -- common/autotest_common.sh@10 -- # set +x 00:07:05.399 ************************************ 00:07:05.399 START TEST nvmf_tcp 00:07:05.399 ************************************ 00:07:05.399 02:23:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.399 * Looking for test storage... 00:07:05.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:05.657 02:23:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:05.657 02:23:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:05.657 02:23:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.657 02:23:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.657 02:23:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.657 02:23:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.657 02:23:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.657 02:23:46 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.657 02:23:46 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.657 02:23:46 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.657 02:23:46 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.657 02:23:46 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.657 02:23:46 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.657 02:23:46 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.657 02:23:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.657 02:23:46 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.657 02:23:46 -- scripts/common.sh@344 -- # : 1 00:07:05.657 02:23:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.658 02:23:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.658 02:23:46 -- scripts/common.sh@364 -- # decimal 1 00:07:05.658 02:23:46 -- scripts/common.sh@352 -- # local d=1 00:07:05.658 02:23:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.658 02:23:46 -- scripts/common.sh@354 -- # echo 1 00:07:05.658 02:23:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.658 02:23:46 -- scripts/common.sh@365 -- # decimal 2 00:07:05.658 02:23:46 -- scripts/common.sh@352 -- # local d=2 00:07:05.658 02:23:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.658 02:23:46 -- scripts/common.sh@354 -- # echo 2 00:07:05.658 02:23:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.658 02:23:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.658 02:23:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.658 02:23:46 -- scripts/common.sh@367 -- # return 0 00:07:05.658 02:23:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.658 02:23:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.658 --rc genhtml_branch_coverage=1 00:07:05.658 --rc genhtml_function_coverage=1 00:07:05.658 --rc genhtml_legend=1 00:07:05.658 --rc geninfo_all_blocks=1 00:07:05.658 --rc geninfo_unexecuted_blocks=1 00:07:05.658 00:07:05.658 ' 00:07:05.658 02:23:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.658 --rc genhtml_branch_coverage=1 00:07:05.658 --rc genhtml_function_coverage=1 00:07:05.658 --rc genhtml_legend=1 00:07:05.658 --rc geninfo_all_blocks=1 00:07:05.658 --rc geninfo_unexecuted_blocks=1 00:07:05.658 00:07:05.658 ' 00:07:05.658 02:23:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.658 --rc genhtml_branch_coverage=1 00:07:05.658 --rc genhtml_function_coverage=1 00:07:05.658 --rc genhtml_legend=1 00:07:05.658 --rc geninfo_all_blocks=1 00:07:05.658 --rc geninfo_unexecuted_blocks=1 00:07:05.658 00:07:05.658 ' 00:07:05.658 02:23:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.658 --rc genhtml_branch_coverage=1 00:07:05.658 --rc genhtml_function_coverage=1 00:07:05.658 --rc genhtml_legend=1 00:07:05.658 --rc geninfo_all_blocks=1 00:07:05.658 --rc geninfo_unexecuted_blocks=1 00:07:05.658 00:07:05.658 ' 00:07:05.658 02:23:46 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.658 02:23:46 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.658 02:23:46 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:05.658 02:23:46 -- nvmf/common.sh@7 -- # uname -s 00:07:05.658 02:23:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.658 02:23:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.658 02:23:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.658 02:23:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.658 02:23:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.658 02:23:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.658 02:23:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.658 02:23:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.658 02:23:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.658 02:23:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.658 02:23:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:05.658 02:23:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:05.658 02:23:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.658 02:23:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.658 02:23:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:05.658 02:23:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.658 02:23:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.658 02:23:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.658 02:23:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.658 02:23:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.658 02:23:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.658 02:23:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.658 02:23:46 -- paths/export.sh@5 -- # export PATH 00:07:05.658 02:23:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.658 02:23:46 -- nvmf/common.sh@46 -- # : 0 00:07:05.658 02:23:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:05.658 02:23:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:05.658 02:23:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:05.658 02:23:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.658 02:23:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.658 02:23:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:05.658 02:23:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:05.658 02:23:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:05.658 02:23:46 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.658 02:23:46 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:05.658 02:23:46 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:05.658 02:23:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.658 02:23:46 -- common/autotest_common.sh@10 -- # set +x 00:07:05.658 02:23:46 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:05.658 02:23:46 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:05.658 02:23:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:05.658 02:23:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.658 02:23:46 -- common/autotest_common.sh@10 -- # set +x 00:07:05.658 ************************************ 00:07:05.658 START TEST nvmf_example 00:07:05.658 ************************************ 00:07:05.658 02:23:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:05.658 * Looking for test storage... 00:07:05.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:05.658 02:23:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:05.658 02:23:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:05.658 02:23:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.918 02:23:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.918 02:23:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.918 02:23:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.918 02:23:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.918 02:23:46 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.918 02:23:46 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.918 02:23:46 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.918 02:23:46 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.918 02:23:46 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.918 02:23:46 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.918 02:23:46 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.918 02:23:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.918 02:23:46 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.918 02:23:46 -- scripts/common.sh@344 -- # : 1 00:07:05.918 02:23:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.918 02:23:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.918 02:23:46 -- scripts/common.sh@364 -- # decimal 1 00:07:05.918 02:23:46 -- scripts/common.sh@352 -- # local d=1 00:07:05.918 02:23:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.918 02:23:46 -- scripts/common.sh@354 -- # echo 1 00:07:05.918 02:23:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.918 02:23:46 -- scripts/common.sh@365 -- # decimal 2 00:07:05.918 02:23:46 -- scripts/common.sh@352 -- # local d=2 00:07:05.918 02:23:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.918 02:23:46 -- scripts/common.sh@354 -- # echo 2 00:07:05.918 02:23:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.918 02:23:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.918 02:23:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.918 02:23:46 -- scripts/common.sh@367 -- # return 0 00:07:05.918 02:23:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.918 02:23:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.918 --rc genhtml_branch_coverage=1 00:07:05.918 --rc genhtml_function_coverage=1 00:07:05.918 --rc genhtml_legend=1 00:07:05.918 --rc geninfo_all_blocks=1 00:07:05.918 --rc geninfo_unexecuted_blocks=1 00:07:05.918 00:07:05.918 ' 00:07:05.918 02:23:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.918 --rc genhtml_branch_coverage=1 00:07:05.918 --rc genhtml_function_coverage=1 00:07:05.918 --rc genhtml_legend=1 00:07:05.918 --rc geninfo_all_blocks=1 00:07:05.918 --rc geninfo_unexecuted_blocks=1 00:07:05.918 00:07:05.918 ' 00:07:05.918 02:23:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.918 --rc genhtml_branch_coverage=1 00:07:05.918 --rc genhtml_function_coverage=1 00:07:05.918 --rc genhtml_legend=1 00:07:05.918 --rc geninfo_all_blocks=1 00:07:05.918 --rc geninfo_unexecuted_blocks=1 00:07:05.918 00:07:05.918 ' 00:07:05.918 02:23:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.918 --rc genhtml_branch_coverage=1 00:07:05.918 --rc genhtml_function_coverage=1 00:07:05.918 --rc genhtml_legend=1 00:07:05.918 --rc geninfo_all_blocks=1 00:07:05.918 --rc geninfo_unexecuted_blocks=1 00:07:05.918 00:07:05.918 ' 00:07:05.918 02:23:46 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:05.918 02:23:46 -- nvmf/common.sh@7 -- # uname -s 00:07:05.918 02:23:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.918 02:23:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.918 02:23:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.918 02:23:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.918 02:23:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.918 02:23:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.918 02:23:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.918 02:23:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.918 02:23:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.918 02:23:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.918 02:23:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
00:07:05.918 02:23:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:05.918 02:23:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.918 02:23:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.918 02:23:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:05.918 02:23:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.918 02:23:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.918 02:23:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.918 02:23:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.918 02:23:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.918 02:23:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.918 02:23:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.918 02:23:46 -- paths/export.sh@5 -- # export PATH 00:07:05.918 02:23:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.918 02:23:46 -- nvmf/common.sh@46 -- # : 0 00:07:05.918 02:23:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:05.918 02:23:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:05.918 02:23:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:05.918 02:23:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.918 02:23:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.918 02:23:46 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:05.918 02:23:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:05.918 02:23:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:05.918 02:23:46 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:05.918 02:23:46 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:05.918 02:23:46 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:05.918 02:23:46 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:05.918 02:23:46 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:05.918 02:23:46 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:05.918 02:23:46 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:05.918 02:23:46 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:05.918 02:23:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.918 02:23:46 -- common/autotest_common.sh@10 -- # set +x 00:07:05.918 02:23:46 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:05.918 02:23:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:05.918 02:23:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.918 02:23:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:05.918 02:23:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:05.918 02:23:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:05.918 02:23:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.918 02:23:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.918 02:23:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.918 02:23:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:05.918 02:23:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:05.918 02:23:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:05.918 02:23:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:05.918 02:23:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:05.918 02:23:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:05.918 02:23:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.918 02:23:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.918 02:23:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:05.918 02:23:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:05.918 02:23:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:05.918 02:23:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:05.918 02:23:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:05.918 02:23:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.918 02:23:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:05.918 02:23:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:05.919 02:23:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:05.919 02:23:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:05.919 02:23:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:05.919 Cannot find device "nvmf_init_br" 00:07:05.919 02:23:46 -- nvmf/common.sh@153 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:05.919 Cannot find device "nvmf_tgt_br" 00:07:05.919 02:23:46 -- nvmf/common.sh@154 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:05.919 Cannot find device "nvmf_tgt_br2" 
00:07:05.919 02:23:46 -- nvmf/common.sh@155 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:05.919 Cannot find device "nvmf_init_br" 00:07:05.919 02:23:46 -- nvmf/common.sh@156 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:05.919 Cannot find device "nvmf_tgt_br" 00:07:05.919 02:23:46 -- nvmf/common.sh@157 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:05.919 Cannot find device "nvmf_tgt_br2" 00:07:05.919 02:23:46 -- nvmf/common.sh@158 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:05.919 Cannot find device "nvmf_br" 00:07:05.919 02:23:46 -- nvmf/common.sh@159 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:05.919 Cannot find device "nvmf_init_if" 00:07:05.919 02:23:46 -- nvmf/common.sh@160 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:05.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.919 02:23:46 -- nvmf/common.sh@161 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:05.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.919 02:23:46 -- nvmf/common.sh@162 -- # true 00:07:05.919 02:23:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:05.919 02:23:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:05.919 02:23:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:06.269 02:23:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:06.269 02:23:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:06.269 02:23:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:06.269 02:23:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:06.269 02:23:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:06.269 02:23:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:06.269 02:23:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:06.269 02:23:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:06.269 02:23:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:06.269 02:23:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:06.269 02:23:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:06.269 02:23:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:06.269 02:23:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:06.269 02:23:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:06.269 02:23:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:06.269 02:23:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:06.269 02:23:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:06.269 02:23:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:06.269 02:23:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:06.269 02:23:46 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:06.269 02:23:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:06.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:07:06.269 00:07:06.269 --- 10.0.0.2 ping statistics --- 00:07:06.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.269 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:07:06.269 02:23:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:06.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:06.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:07:06.269 00:07:06.269 --- 10.0.0.3 ping statistics --- 00:07:06.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.269 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:06.269 02:23:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:06.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:07:06.269 00:07:06.269 --- 10.0.0.1 ping statistics --- 00:07:06.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.269 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:06.269 02:23:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.269 02:23:46 -- nvmf/common.sh@421 -- # return 0 00:07:06.269 02:23:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:06.269 02:23:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.269 02:23:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:06.269 02:23:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:06.269 02:23:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.269 02:23:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:06.269 02:23:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:06.269 02:23:46 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:06.269 02:23:46 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:06.269 02:23:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:06.269 02:23:46 -- common/autotest_common.sh@10 -- # set +x 00:07:06.269 02:23:46 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:06.269 02:23:46 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:06.269 02:23:46 -- target/nvmf_example.sh@34 -- # nvmfpid=60144 00:07:06.269 02:23:46 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:06.269 02:23:46 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.269 02:23:46 -- target/nvmf_example.sh@36 -- # waitforlisten 60144 00:07:06.269 02:23:46 -- common/autotest_common.sh@829 -- # '[' -z 60144 ']' 00:07:06.269 02:23:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.269 02:23:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.269 02:23:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
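The nvmf_veth_init block above builds the self-contained TCP topology the rest of this test runs on: the target-side veth ends live in the nvmf_tgt_ns_spdk namespace, their peers stay in the root namespace and are enslaved to the nvmf_br bridge, and the three pings confirm the initiator address 10.0.0.1 and the target addresses 10.0.0.2/10.0.0.3 are mutually reachable before the example target comes up. Stripped of cleanup, the link-up commands, and the second target interface, the topology amounts to the following sketch (all commands appear in the trace above):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                     # bridge the two sides together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target address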
00:07:06.269 02:23:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.269 02:23:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.667 02:23:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.667 02:23:47 -- common/autotest_common.sh@862 -- # return 0 00:07:07.667 02:23:47 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:07.667 02:23:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.667 02:23:47 -- common/autotest_common.sh@10 -- # set +x 00:07:07.667 02:23:48 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.667 02:23:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.667 02:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:07.667 02:23:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.667 02:23:48 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:07.667 02:23:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.667 02:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:07.667 02:23:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.667 02:23:48 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:07.667 02:23:48 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:07.667 02:23:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.667 02:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:07.667 02:23:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.667 02:23:48 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:07.667 02:23:48 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:07.667 02:23:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.667 02:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:07.667 02:23:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.667 02:23:48 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.667 02:23:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.667 02:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:07.667 02:23:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.667 02:23:48 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:07.667 02:23:48 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:19.879 Initializing NVMe Controllers 00:07:19.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:19.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:19.879 Initialization complete. Launching workers. 
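Before the measurement output that follows, the example target was configured entirely over JSON-RPC and then driven from the root namespace with spdk_nvme_perf. A compressed sketch of that sequence, assuming rpc.py talks to the example app's default /var/tmp/spdk.sock socket ($rpc below is only shorthand for the full scripts/rpc.py path):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # same transport options as the run above
  $rpc bdev_malloc_create 64 512                               # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 10-second mixed random read/write run, 4 KiB I/O at queue depth 64, against the namespaced target
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'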
00:07:19.879 ======================================================== 00:07:19.879 Latency(us) 00:07:19.879 Device Information : IOPS MiB/s Average min max 00:07:19.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16354.15 63.88 3912.94 665.73 21065.69 00:07:19.879 ======================================================== 00:07:19.879 Total : 16354.15 63.88 3912.94 665.73 21065.69 00:07:19.879 00:07:19.879 02:23:58 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:19.879 02:23:58 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:19.879 02:23:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:19.879 02:23:58 -- nvmf/common.sh@116 -- # sync 00:07:19.879 02:23:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:19.879 02:23:58 -- nvmf/common.sh@119 -- # set +e 00:07:19.879 02:23:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:19.879 02:23:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:19.879 rmmod nvme_tcp 00:07:19.879 rmmod nvme_fabrics 00:07:19.879 rmmod nvme_keyring 00:07:19.879 02:23:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:19.879 02:23:58 -- nvmf/common.sh@123 -- # set -e 00:07:19.879 02:23:58 -- nvmf/common.sh@124 -- # return 0 00:07:19.879 02:23:58 -- nvmf/common.sh@477 -- # '[' -n 60144 ']' 00:07:19.879 02:23:58 -- nvmf/common.sh@478 -- # killprocess 60144 00:07:19.879 02:23:58 -- common/autotest_common.sh@936 -- # '[' -z 60144 ']' 00:07:19.879 02:23:58 -- common/autotest_common.sh@940 -- # kill -0 60144 00:07:19.879 02:23:58 -- common/autotest_common.sh@941 -- # uname 00:07:19.879 02:23:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:19.879 02:23:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60144 00:07:19.879 killing process with pid 60144 00:07:19.879 02:23:58 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:19.879 02:23:58 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:19.879 02:23:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60144' 00:07:19.879 02:23:58 -- common/autotest_common.sh@955 -- # kill 60144 00:07:19.879 02:23:58 -- common/autotest_common.sh@960 -- # wait 60144 00:07:19.879 nvmf threads initialize successfully 00:07:19.879 bdev subsystem init successfully 00:07:19.879 created a nvmf target service 00:07:19.879 create targets's poll groups done 00:07:19.879 all subsystems of target started 00:07:19.879 nvmf target is running 00:07:19.879 all subsystems of target stopped 00:07:19.879 destroy targets's poll groups done 00:07:19.879 destroyed the nvmf target service 00:07:19.879 bdev subsystem finish successfully 00:07:19.879 nvmf threads destroy successfully 00:07:19.879 02:23:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:19.879 02:23:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:19.879 02:23:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:19.879 02:23:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.879 02:23:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:19.879 02:23:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.879 02:23:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.879 02:23:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.879 02:23:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:19.879 02:23:58 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:19.879 02:23:58 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:07:19.879 02:23:58 -- common/autotest_common.sh@10 -- # set +x 00:07:19.879 00:07:19.879 real 0m12.618s 00:07:19.879 user 0m45.037s 00:07:19.879 sys 0m2.013s 00:07:19.879 02:23:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.879 02:23:58 -- common/autotest_common.sh@10 -- # set +x 00:07:19.879 ************************************ 00:07:19.879 END TEST nvmf_example 00:07:19.879 ************************************ 00:07:19.879 02:23:58 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:19.879 02:23:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:19.879 02:23:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.879 02:23:58 -- common/autotest_common.sh@10 -- # set +x 00:07:19.879 ************************************ 00:07:19.879 START TEST nvmf_filesystem 00:07:19.879 ************************************ 00:07:19.879 02:23:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:19.879 * Looking for test storage... 00:07:19.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.879 02:23:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:19.879 02:23:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:19.879 02:23:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:19.879 02:23:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:19.879 02:23:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:19.879 02:23:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:19.879 02:23:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:19.879 02:23:59 -- scripts/common.sh@335 -- # IFS=.-: 00:07:19.879 02:23:59 -- scripts/common.sh@335 -- # read -ra ver1 00:07:19.879 02:23:59 -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.879 02:23:59 -- scripts/common.sh@336 -- # read -ra ver2 00:07:19.879 02:23:59 -- scripts/common.sh@337 -- # local 'op=<' 00:07:19.879 02:23:59 -- scripts/common.sh@339 -- # ver1_l=2 00:07:19.879 02:23:59 -- scripts/common.sh@340 -- # ver2_l=1 00:07:19.879 02:23:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:19.879 02:23:59 -- scripts/common.sh@343 -- # case "$op" in 00:07:19.879 02:23:59 -- scripts/common.sh@344 -- # : 1 00:07:19.879 02:23:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:19.879 02:23:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.879 02:23:59 -- scripts/common.sh@364 -- # decimal 1 00:07:19.879 02:23:59 -- scripts/common.sh@352 -- # local d=1 00:07:19.879 02:23:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.880 02:23:59 -- scripts/common.sh@354 -- # echo 1 00:07:19.880 02:23:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:19.880 02:23:59 -- scripts/common.sh@365 -- # decimal 2 00:07:19.880 02:23:59 -- scripts/common.sh@352 -- # local d=2 00:07:19.880 02:23:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.880 02:23:59 -- scripts/common.sh@354 -- # echo 2 00:07:19.880 02:23:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:19.880 02:23:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:19.880 02:23:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:19.880 02:23:59 -- scripts/common.sh@367 -- # return 0 00:07:19.880 02:23:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.880 02:23:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:19.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.880 --rc genhtml_branch_coverage=1 00:07:19.880 --rc genhtml_function_coverage=1 00:07:19.880 --rc genhtml_legend=1 00:07:19.880 --rc geninfo_all_blocks=1 00:07:19.880 --rc geninfo_unexecuted_blocks=1 00:07:19.880 00:07:19.880 ' 00:07:19.880 02:23:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:19.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.880 --rc genhtml_branch_coverage=1 00:07:19.880 --rc genhtml_function_coverage=1 00:07:19.880 --rc genhtml_legend=1 00:07:19.880 --rc geninfo_all_blocks=1 00:07:19.880 --rc geninfo_unexecuted_blocks=1 00:07:19.880 00:07:19.880 ' 00:07:19.880 02:23:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:19.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.880 --rc genhtml_branch_coverage=1 00:07:19.880 --rc genhtml_function_coverage=1 00:07:19.880 --rc genhtml_legend=1 00:07:19.880 --rc geninfo_all_blocks=1 00:07:19.880 --rc geninfo_unexecuted_blocks=1 00:07:19.880 00:07:19.880 ' 00:07:19.880 02:23:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:19.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.880 --rc genhtml_branch_coverage=1 00:07:19.880 --rc genhtml_function_coverage=1 00:07:19.880 --rc genhtml_legend=1 00:07:19.880 --rc geninfo_all_blocks=1 00:07:19.880 --rc geninfo_unexecuted_blocks=1 00:07:19.880 00:07:19.880 ' 00:07:19.880 02:23:59 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:07:19.880 02:23:59 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:19.880 02:23:59 -- common/autotest_common.sh@34 -- # set -e 00:07:19.880 02:23:59 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:19.880 02:23:59 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:19.880 02:23:59 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:19.880 02:23:59 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:19.880 02:23:59 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:19.880 02:23:59 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:19.880 02:23:59 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:19.880 02:23:59 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:19.880 02:23:59 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:07:19.880 02:23:59 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:19.880 02:23:59 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:19.880 02:23:59 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:19.880 02:23:59 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:19.880 02:23:59 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:19.880 02:23:59 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:19.880 02:23:59 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:19.880 02:23:59 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:19.880 02:23:59 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:19.880 02:23:59 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:19.880 02:23:59 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:19.880 02:23:59 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:19.880 02:23:59 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:19.880 02:23:59 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:19.880 02:23:59 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:19.880 02:23:59 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:19.880 02:23:59 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:19.880 02:23:59 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:19.880 02:23:59 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:19.880 02:23:59 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:19.880 02:23:59 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:19.880 02:23:59 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:19.880 02:23:59 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:19.880 02:23:59 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:19.880 02:23:59 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:19.880 02:23:59 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:19.880 02:23:59 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:19.880 02:23:59 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:19.880 02:23:59 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:19.880 02:23:59 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:19.880 02:23:59 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:19.880 02:23:59 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:19.880 02:23:59 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:19.880 02:23:59 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:19.880 02:23:59 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:19.880 02:23:59 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:19.880 02:23:59 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:19.880 02:23:59 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:19.880 02:23:59 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:19.880 02:23:59 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:19.880 02:23:59 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:19.880 02:23:59 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:19.880 02:23:59 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:19.880 02:23:59 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:19.880 02:23:59 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:19.880 02:23:59 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:19.880 
02:23:59 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:19.880 02:23:59 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:19.880 02:23:59 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:19.880 02:23:59 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:19.880 02:23:59 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:19.880 02:23:59 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:19.880 02:23:59 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:07:19.880 02:23:59 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:19.880 02:23:59 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:19.880 02:23:59 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:19.880 02:23:59 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:19.880 02:23:59 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:19.880 02:23:59 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:19.880 02:23:59 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:19.880 02:23:59 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:19.880 02:23:59 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:19.880 02:23:59 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:07:19.880 02:23:59 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:19.880 02:23:59 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:19.880 02:23:59 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:19.880 02:23:59 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:19.880 02:23:59 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:19.880 02:23:59 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:19.880 02:23:59 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:19.880 02:23:59 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:19.880 02:23:59 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:19.880 02:23:59 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:19.880 02:23:59 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:19.880 02:23:59 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:19.880 02:23:59 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:07:19.880 02:23:59 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:07:19.880 02:23:59 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:07:19.880 02:23:59 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:07:19.880 02:23:59 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:07:19.880 02:23:59 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:07:19.880 02:23:59 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:07:19.880 02:23:59 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:19.880 02:23:59 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:19.880 02:23:59 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:19.880 02:23:59 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:19.880 02:23:59 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:19.880 02:23:59 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:19.880 02:23:59 -- common/applications.sh@22 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:07:19.880 02:23:59 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:19.880 #define SPDK_CONFIG_H 00:07:19.880 #define SPDK_CONFIG_APPS 1 00:07:19.880 #define SPDK_CONFIG_ARCH native 00:07:19.880 #undef SPDK_CONFIG_ASAN 00:07:19.880 #define SPDK_CONFIG_AVAHI 1 00:07:19.880 #undef SPDK_CONFIG_CET 00:07:19.880 #define SPDK_CONFIG_COVERAGE 1 00:07:19.880 #define SPDK_CONFIG_CROSS_PREFIX 00:07:19.880 #undef SPDK_CONFIG_CRYPTO 00:07:19.880 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:19.880 #undef SPDK_CONFIG_CUSTOMOCF 00:07:19.880 #undef SPDK_CONFIG_DAOS 00:07:19.880 #define SPDK_CONFIG_DAOS_DIR 00:07:19.880 #define SPDK_CONFIG_DEBUG 1 00:07:19.880 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:19.880 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:19.880 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:19.881 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:19.881 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:19.881 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:19.881 #define SPDK_CONFIG_EXAMPLES 1 00:07:19.881 #undef SPDK_CONFIG_FC 00:07:19.881 #define SPDK_CONFIG_FC_PATH 00:07:19.881 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:19.881 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:19.881 #undef SPDK_CONFIG_FUSE 00:07:19.881 #undef SPDK_CONFIG_FUZZER 00:07:19.881 #define SPDK_CONFIG_FUZZER_LIB 00:07:19.881 #define SPDK_CONFIG_GOLANG 1 00:07:19.881 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:19.881 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:19.881 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:19.881 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:19.881 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:19.881 #define SPDK_CONFIG_IDXD 1 00:07:19.881 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:19.881 #undef SPDK_CONFIG_IPSEC_MB 00:07:19.881 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:19.881 #define SPDK_CONFIG_ISAL 1 00:07:19.881 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:19.881 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:19.881 #define SPDK_CONFIG_LIBDIR 00:07:19.881 #undef SPDK_CONFIG_LTO 00:07:19.881 #define SPDK_CONFIG_MAX_LCORES 00:07:19.881 #define SPDK_CONFIG_NVME_CUSE 1 00:07:19.881 #undef SPDK_CONFIG_OCF 00:07:19.881 #define SPDK_CONFIG_OCF_PATH 00:07:19.881 #define SPDK_CONFIG_OPENSSL_PATH 00:07:19.881 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:19.881 #undef SPDK_CONFIG_PGO_USE 00:07:19.881 #define SPDK_CONFIG_PREFIX /usr/local 00:07:19.881 #undef SPDK_CONFIG_RAID5F 00:07:19.881 #undef SPDK_CONFIG_RBD 00:07:19.881 #define SPDK_CONFIG_RDMA 1 00:07:19.881 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:19.881 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:19.881 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:19.881 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:19.881 #define SPDK_CONFIG_SHARED 1 00:07:19.881 #undef SPDK_CONFIG_SMA 00:07:19.881 #define SPDK_CONFIG_TESTS 1 00:07:19.881 #undef SPDK_CONFIG_TSAN 00:07:19.881 #define SPDK_CONFIG_UBLK 1 00:07:19.881 #define SPDK_CONFIG_UBSAN 1 00:07:19.881 #undef SPDK_CONFIG_UNIT_TESTS 00:07:19.881 #undef SPDK_CONFIG_URING 00:07:19.881 #define SPDK_CONFIG_URING_PATH 00:07:19.881 #undef SPDK_CONFIG_URING_ZNS 00:07:19.881 #define SPDK_CONFIG_USDT 1 00:07:19.881 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:19.881 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:19.881 #define SPDK_CONFIG_VFIO_USER 1 00:07:19.881 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:19.881 #define SPDK_CONFIG_VHOST 1 00:07:19.881 #define SPDK_CONFIG_VIRTIO 1 00:07:19.881 #undef SPDK_CONFIG_VTUNE 00:07:19.881 #define SPDK_CONFIG_VTUNE_DIR 
00:07:19.881 #define SPDK_CONFIG_WERROR 1 00:07:19.881 #define SPDK_CONFIG_WPDK_DIR 00:07:19.881 #undef SPDK_CONFIG_XNVME 00:07:19.881 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:19.881 02:23:59 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:19.881 02:23:59 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.881 02:23:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.881 02:23:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.881 02:23:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.881 02:23:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.881 02:23:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.881 02:23:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.881 02:23:59 -- paths/export.sh@5 -- # export PATH 00:07:19.881 02:23:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.881 02:23:59 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:19.881 02:23:59 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:07:19.881 02:23:59 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:19.881 02:23:59 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:07:19.881 02:23:59 -- pm/common@7 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:07:19.881 02:23:59 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:07:19.881 02:23:59 -- pm/common@16 -- # TEST_TAG=N/A 00:07:19.881 02:23:59 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:07:19.881 02:23:59 -- common/autotest_common.sh@52 -- # : 1 00:07:19.881 02:23:59 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:19.881 02:23:59 -- common/autotest_common.sh@56 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:19.881 02:23:59 -- common/autotest_common.sh@58 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:19.881 02:23:59 -- common/autotest_common.sh@60 -- # : 1 00:07:19.881 02:23:59 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:19.881 02:23:59 -- common/autotest_common.sh@62 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:19.881 02:23:59 -- common/autotest_common.sh@64 -- # : 00:07:19.881 02:23:59 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:19.881 02:23:59 -- common/autotest_common.sh@66 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:19.881 02:23:59 -- common/autotest_common.sh@68 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:19.881 02:23:59 -- common/autotest_common.sh@70 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:19.881 02:23:59 -- common/autotest_common.sh@72 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:19.881 02:23:59 -- common/autotest_common.sh@74 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:19.881 02:23:59 -- common/autotest_common.sh@76 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:19.881 02:23:59 -- common/autotest_common.sh@78 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:19.881 02:23:59 -- common/autotest_common.sh@80 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:19.881 02:23:59 -- common/autotest_common.sh@82 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:19.881 02:23:59 -- common/autotest_common.sh@84 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:19.881 02:23:59 -- common/autotest_common.sh@86 -- # : 1 00:07:19.881 02:23:59 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:19.881 02:23:59 -- common/autotest_common.sh@88 -- # : 1 00:07:19.881 02:23:59 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:19.881 02:23:59 -- common/autotest_common.sh@90 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:19.881 02:23:59 -- common/autotest_common.sh@92 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:19.881 02:23:59 -- common/autotest_common.sh@94 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:19.881 02:23:59 -- common/autotest_common.sh@96 -- # : tcp 00:07:19.881 02:23:59 -- common/autotest_common.sh@97 -- # export 
SPDK_TEST_NVMF_TRANSPORT 00:07:19.881 02:23:59 -- common/autotest_common.sh@98 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:19.881 02:23:59 -- common/autotest_common.sh@100 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:19.881 02:23:59 -- common/autotest_common.sh@102 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:19.881 02:23:59 -- common/autotest_common.sh@104 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:19.881 02:23:59 -- common/autotest_common.sh@106 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:19.881 02:23:59 -- common/autotest_common.sh@108 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:19.881 02:23:59 -- common/autotest_common.sh@110 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:19.881 02:23:59 -- common/autotest_common.sh@112 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:19.881 02:23:59 -- common/autotest_common.sh@114 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:19.881 02:23:59 -- common/autotest_common.sh@116 -- # : 1 00:07:19.881 02:23:59 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:19.881 02:23:59 -- common/autotest_common.sh@118 -- # : 00:07:19.881 02:23:59 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:19.881 02:23:59 -- common/autotest_common.sh@120 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:19.881 02:23:59 -- common/autotest_common.sh@122 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:19.881 02:23:59 -- common/autotest_common.sh@124 -- # : 0 00:07:19.881 02:23:59 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:19.882 02:23:59 -- common/autotest_common.sh@126 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:19.882 02:23:59 -- common/autotest_common.sh@128 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:19.882 02:23:59 -- common/autotest_common.sh@130 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:19.882 02:23:59 -- common/autotest_common.sh@132 -- # : 00:07:19.882 02:23:59 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:19.882 02:23:59 -- common/autotest_common.sh@134 -- # : true 00:07:19.882 02:23:59 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:19.882 02:23:59 -- common/autotest_common.sh@136 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:19.882 02:23:59 -- common/autotest_common.sh@138 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:19.882 02:23:59 -- common/autotest_common.sh@140 -- # : 1 00:07:19.882 02:23:59 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:19.882 02:23:59 -- common/autotest_common.sh@142 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:19.882 02:23:59 -- common/autotest_common.sh@144 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@145 -- # 
export SPDK_TEST_SCHEDULER 00:07:19.882 02:23:59 -- common/autotest_common.sh@146 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:19.882 02:23:59 -- common/autotest_common.sh@148 -- # : 00:07:19.882 02:23:59 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:19.882 02:23:59 -- common/autotest_common.sh@150 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:19.882 02:23:59 -- common/autotest_common.sh@152 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:19.882 02:23:59 -- common/autotest_common.sh@154 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:19.882 02:23:59 -- common/autotest_common.sh@156 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:19.882 02:23:59 -- common/autotest_common.sh@158 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:19.882 02:23:59 -- common/autotest_common.sh@160 -- # : 0 00:07:19.882 02:23:59 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:19.882 02:23:59 -- common/autotest_common.sh@163 -- # : 00:07:19.882 02:23:59 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:19.882 02:23:59 -- common/autotest_common.sh@165 -- # : 1 00:07:19.882 02:23:59 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:19.882 02:23:59 -- common/autotest_common.sh@167 -- # : 1 00:07:19.882 02:23:59 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:19.882 02:23:59 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:19.882 02:23:59 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:07:19.882 02:23:59 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:19.882 02:23:59 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:07:19.882 02:23:59 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:19.882 02:23:59 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:19.882 02:23:59 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:19.882 02:23:59 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:07:19.882 02:23:59 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:19.882 02:23:59 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:19.882 02:23:59 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:19.882 02:23:59 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:19.882 02:23:59 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:19.882 02:23:59 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:19.882 02:23:59 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:19.882 02:23:59 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:19.882 02:23:59 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:19.882 02:23:59 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:19.882 02:23:59 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:19.882 02:23:59 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:19.882 02:23:59 -- common/autotest_common.sh@196 -- # cat 00:07:19.882 02:23:59 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:19.882 02:23:59 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:19.882 02:23:59 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:19.882 02:23:59 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:19.882 02:23:59 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:19.882 02:23:59 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:19.882 02:23:59 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:19.882 02:23:59 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:19.882 02:23:59 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:07:19.882 02:23:59 -- 
common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:19.882 02:23:59 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:07:19.882 02:23:59 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:19.882 02:23:59 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:19.882 02:23:59 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:19.882 02:23:59 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:19.882 02:23:59 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:19.882 02:23:59 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:07:19.882 02:23:59 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:19.882 02:23:59 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:19.882 02:23:59 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:19.882 02:23:59 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:19.882 02:23:59 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:19.882 02:23:59 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:19.882 02:23:59 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:07:19.882 02:23:59 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:19.882 02:23:59 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:19.882 02:23:59 -- common/autotest_common.sh@255 -- # lcov_opt= 00:07:19.882 02:23:59 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:19.882 02:23:59 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:19.882 02:23:59 -- common/autotest_common.sh@259 -- # valgrind= 00:07:19.882 02:23:59 -- common/autotest_common.sh@265 -- # uname -s 00:07:19.882 02:23:59 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:19.882 02:23:59 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:19.882 02:23:59 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:19.882 02:23:59 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:19.882 02:23:59 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:19.882 02:23:59 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:19.882 02:23:59 -- common/autotest_common.sh@275 -- # MAKE=make 00:07:19.882 02:23:59 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:07:19.882 02:23:59 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:19.882 02:23:59 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:19.882 02:23:59 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:07:19.882 02:23:59 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:19.882 02:23:59 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:19.882 02:23:59 -- common/autotest_common.sh@301 -- # for i in "$@" 00:07:19.882 02:23:59 -- common/autotest_common.sh@302 -- # case "$i" in 00:07:19.882 02:23:59 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:07:19.882 02:23:59 -- common/autotest_common.sh@319 -- # [[ -z 60397 ]] 00:07:19.882 02:23:59 -- common/autotest_common.sh@319 -- # kill -0 60397 00:07:19.882 02:23:59 -- 
common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:19.882 02:23:59 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:19.882 02:23:59 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:19.882 02:23:59 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:19.882 02:23:59 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:19.882 02:23:59 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:19.882 02:23:59 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:19.882 02:23:59 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:19.882 02:23:59 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.oSeBTZ 00:07:19.882 02:23:59 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:19.883 02:23:59 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:19.883 02:23:59 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:19.883 02:23:59 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.oSeBTZ/tests/target /tmp/spdk.oSeBTZ 00:07:19.883 02:23:59 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@328 -- # df -T 00:07:19.883 02:23:59 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=14016241664 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=5551316992 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265171968 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # 
uses["$mount"]=12816384 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=14016241664 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=5551316992 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266294272 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266429440 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=135168 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253273600 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253285888 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:07:19.883 02:23:59 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # avails["$mount"]=98018119680 00:07:19.883 02:23:59 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:07:19.883 02:23:59 -- common/autotest_common.sh@364 -- # uses["$mount"]=1684660224 00:07:19.883 02:23:59 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:19.883 02:23:59 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:19.883 * Looking for test storage... 
00:07:19.883 02:23:59 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:19.883 02:23:59 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:19.883 02:23:59 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:19.883 02:23:59 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.883 02:23:59 -- common/autotest_common.sh@373 -- # mount=/home 00:07:19.883 02:23:59 -- common/autotest_common.sh@375 -- # target_space=14016241664 00:07:19.883 02:23:59 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:19.883 02:23:59 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:19.883 02:23:59 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:07:19.883 02:23:59 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:07:19.883 02:23:59 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:07:19.883 02:23:59 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.883 02:23:59 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.883 02:23:59 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:19.883 02:23:59 -- common/autotest_common.sh@390 -- # return 0 00:07:19.883 02:23:59 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:19.883 02:23:59 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:19.883 02:23:59 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:19.883 02:23:59 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:19.883 02:23:59 -- common/autotest_common.sh@1682 -- # true 00:07:19.883 02:23:59 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:19.883 02:23:59 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:19.883 02:23:59 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:19.883 02:23:59 -- common/autotest_common.sh@27 -- # exec 00:07:19.883 02:23:59 -- common/autotest_common.sh@29 -- # exec 00:07:19.883 02:23:59 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:19.883 02:23:59 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:19.883 02:23:59 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:19.883 02:23:59 -- common/autotest_common.sh@18 -- # set -x 00:07:19.883 02:23:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:19.883 02:23:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:19.883 02:23:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:19.883 02:23:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:19.883 02:23:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:19.883 02:23:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:19.883 02:23:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:19.883 02:23:59 -- scripts/common.sh@335 -- # IFS=.-: 00:07:19.883 02:23:59 -- scripts/common.sh@335 -- # read -ra ver1 00:07:19.883 02:23:59 -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.883 02:23:59 -- scripts/common.sh@336 -- # read -ra ver2 00:07:19.883 02:23:59 -- scripts/common.sh@337 -- # local 'op=<' 00:07:19.883 02:23:59 -- scripts/common.sh@339 -- # ver1_l=2 00:07:19.883 02:23:59 -- scripts/common.sh@340 -- # ver2_l=1 00:07:19.883 02:23:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:19.883 02:23:59 -- scripts/common.sh@343 -- # case "$op" in 00:07:19.883 02:23:59 -- scripts/common.sh@344 -- # : 1 00:07:19.883 02:23:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:19.883 02:23:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.883 02:23:59 -- scripts/common.sh@364 -- # decimal 1 00:07:19.883 02:23:59 -- scripts/common.sh@352 -- # local d=1 00:07:19.883 02:23:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.883 02:23:59 -- scripts/common.sh@354 -- # echo 1 00:07:19.883 02:23:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:19.883 02:23:59 -- scripts/common.sh@365 -- # decimal 2 00:07:19.883 02:23:59 -- scripts/common.sh@352 -- # local d=2 00:07:19.883 02:23:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.883 02:23:59 -- scripts/common.sh@354 -- # echo 2 00:07:19.883 02:23:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:19.883 02:23:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:19.883 02:23:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:19.883 02:23:59 -- scripts/common.sh@367 -- # return 0 00:07:19.883 02:23:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.883 02:23:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:19.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.883 --rc genhtml_branch_coverage=1 00:07:19.883 --rc genhtml_function_coverage=1 00:07:19.883 --rc genhtml_legend=1 00:07:19.883 --rc geninfo_all_blocks=1 00:07:19.883 --rc geninfo_unexecuted_blocks=1 00:07:19.883 00:07:19.883 ' 00:07:19.883 02:23:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:19.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.883 --rc genhtml_branch_coverage=1 00:07:19.883 --rc genhtml_function_coverage=1 00:07:19.883 --rc genhtml_legend=1 00:07:19.883 --rc geninfo_all_blocks=1 00:07:19.883 --rc geninfo_unexecuted_blocks=1 00:07:19.883 00:07:19.883 ' 00:07:19.884 02:23:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:19.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.884 --rc genhtml_branch_coverage=1 00:07:19.884 --rc genhtml_function_coverage=1 00:07:19.884 --rc genhtml_legend=1 00:07:19.884 --rc geninfo_all_blocks=1 00:07:19.884 --rc 
geninfo_unexecuted_blocks=1 00:07:19.884 00:07:19.884 ' 00:07:19.884 02:23:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:19.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.884 --rc genhtml_branch_coverage=1 00:07:19.884 --rc genhtml_function_coverage=1 00:07:19.884 --rc genhtml_legend=1 00:07:19.884 --rc geninfo_all_blocks=1 00:07:19.884 --rc geninfo_unexecuted_blocks=1 00:07:19.884 00:07:19.884 ' 00:07:19.884 02:23:59 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:19.884 02:23:59 -- nvmf/common.sh@7 -- # uname -s 00:07:19.884 02:23:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.884 02:23:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.884 02:23:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.884 02:23:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.884 02:23:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.884 02:23:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.884 02:23:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.884 02:23:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.884 02:23:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.884 02:23:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.884 02:23:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:19.884 02:23:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:19.884 02:23:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.884 02:23:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.884 02:23:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:19.884 02:23:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:19.884 02:23:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.884 02:23:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.884 02:23:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.884 02:23:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.884 02:23:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.884 02:23:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.884 02:23:59 -- paths/export.sh@5 -- # export PATH 00:07:19.884 02:23:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.884 02:23:59 -- nvmf/common.sh@46 -- # : 0 00:07:19.884 02:23:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:19.884 02:23:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:19.884 02:23:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:19.884 02:23:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.884 02:23:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.884 02:23:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:19.884 02:23:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:19.884 02:23:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:19.884 02:23:59 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:19.884 02:23:59 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:19.884 02:23:59 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:19.884 02:23:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:19.884 02:23:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.884 02:23:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:19.884 02:23:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:19.884 02:23:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:19.884 02:23:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.884 02:23:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.884 02:23:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.884 02:23:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:19.884 02:23:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:19.884 02:23:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:19.884 02:23:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:19.884 02:23:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:19.884 02:23:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:19.884 02:23:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.884 02:23:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.884 02:23:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:19.884 02:23:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:19.884 02:23:59 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:19.884 02:23:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:19.884 02:23:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:19.884 02:23:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.884 02:23:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:19.884 02:23:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:19.884 02:23:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:19.884 02:23:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:19.884 02:23:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:19.884 02:23:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:19.884 Cannot find device "nvmf_tgt_br" 00:07:19.884 02:23:59 -- nvmf/common.sh@154 -- # true 00:07:19.884 02:23:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:19.884 Cannot find device "nvmf_tgt_br2" 00:07:19.884 02:23:59 -- nvmf/common.sh@155 -- # true 00:07:19.884 02:23:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:19.884 02:23:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:19.884 Cannot find device "nvmf_tgt_br" 00:07:19.884 02:23:59 -- nvmf/common.sh@157 -- # true 00:07:19.884 02:23:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:19.884 Cannot find device "nvmf_tgt_br2" 00:07:19.884 02:23:59 -- nvmf/common.sh@158 -- # true 00:07:19.884 02:23:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:19.884 02:23:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:19.884 02:23:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:19.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.884 02:23:59 -- nvmf/common.sh@161 -- # true 00:07:19.884 02:23:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:19.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:19.884 02:23:59 -- nvmf/common.sh@162 -- # true 00:07:19.884 02:23:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:19.884 02:23:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:19.884 02:23:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:19.884 02:23:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:19.884 02:23:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:19.884 02:23:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:19.884 02:23:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:19.884 02:23:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:19.885 02:23:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:19.885 02:23:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:19.885 02:23:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:19.885 02:23:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:19.885 02:23:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:19.885 02:23:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:19.885 02:23:59 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:19.885 02:23:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:19.885 02:23:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:19.885 02:23:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:19.885 02:23:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:19.885 02:23:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:19.885 02:23:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:19.885 02:23:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:19.885 02:23:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:19.885 02:23:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:19.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:07:19.885 00:07:19.885 --- 10.0.0.2 ping statistics --- 00:07:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.885 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:19.885 02:23:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:19.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:19.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:07:19.885 00:07:19.885 --- 10.0.0.3 ping statistics --- 00:07:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.885 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:19.885 02:23:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:19.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:19.885 00:07:19.885 --- 10.0.0.1 ping statistics --- 00:07:19.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.885 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:19.885 02:23:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.885 02:23:59 -- nvmf/common.sh@421 -- # return 0 00:07:19.885 02:23:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:19.885 02:23:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.885 02:23:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:19.885 02:23:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:19.885 02:23:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.885 02:23:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:19.885 02:23:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:19.885 02:23:59 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:19.885 02:23:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:19.885 02:23:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.885 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:07:19.885 ************************************ 00:07:19.885 START TEST nvmf_filesystem_no_in_capsule 00:07:19.885 ************************************ 00:07:19.885 02:23:59 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:07:19.885 02:23:59 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:19.885 02:23:59 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:19.885 02:23:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:19.885 02:23:59 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:19.885 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:07:19.885 02:23:59 -- nvmf/common.sh@469 -- # nvmfpid=60573 00:07:19.885 02:23:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:19.885 02:23:59 -- nvmf/common.sh@470 -- # waitforlisten 60573 00:07:19.885 02:23:59 -- common/autotest_common.sh@829 -- # '[' -z 60573 ']' 00:07:19.885 02:23:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.885 02:23:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.885 02:23:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.885 02:23:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.885 02:23:59 -- common/autotest_common.sh@10 -- # set +x 00:07:19.885 [2024-11-21 02:23:59.732732] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.885 [2024-11-21 02:23:59.732868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.885 [2024-11-21 02:23:59.874679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.885 [2024-11-21 02:23:59.986671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.885 [2024-11-21 02:23:59.986871] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.885 [2024-11-21 02:23:59.986889] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.885 [2024-11-21 02:23:59.986900] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
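The trace up to this point is the prologue that test/nvmf/common.sh runs before any NVMe-oF/TCP test: it first tears down whatever a previous run left behind (the "Cannot find device" and "Cannot open network namespace" messages are expected here), then builds a veth-and-bridge topology in which the initiator sits on the host at 10.0.0.1 while the target owns 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, verifies reachability with ping, loads nvme-tcp, and finally starts nvmf_tgt inside that namespace. A condensed standalone sketch of the same setup, using only commands that appear in the trace (the nvmf_tgt path and core mask are copied from the log and would differ on another machine):

# Sketch: rebuild the veth/bridge layout shown in the trace above. Run as root.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pair (port 1)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target-side pair (port 2)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> host
# The target application is then launched inside the namespace, as in the log:
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Bridging both target-side veths onto nvmf_br is what lets the single initiator interface reach both listener addresses without extra routes, which is exactly what the three pings in the trace confirm.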
00:07:19.885 [2024-11-21 02:23:59.987089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.885 [2024-11-21 02:23:59.987251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.885 [2024-11-21 02:23:59.988027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.885 [2024-11-21 02:23:59.988040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.143 02:24:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.144 02:24:00 -- common/autotest_common.sh@862 -- # return 0 00:07:20.144 02:24:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:20.144 02:24:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:20.144 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:07:20.144 02:24:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.144 02:24:00 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:20.144 02:24:00 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:20.144 02:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.144 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:07:20.144 [2024-11-21 02:24:00.742854] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.144 02:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.144 02:24:00 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:20.144 02:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.144 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:07:20.402 Malloc1 00:07:20.402 02:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.402 02:24:00 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:20.402 02:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.402 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:07:20.402 02:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.402 02:24:00 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:20.402 02:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.402 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:07:20.402 02:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.402 02:24:00 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.402 02:24:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.402 02:24:00 -- common/autotest_common.sh@10 -- # set +x 00:07:20.402 [2024-11-21 02:24:00.994907] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.402 02:24:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.402 02:24:00 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:20.402 02:24:01 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:20.402 02:24:01 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:20.402 02:24:01 -- common/autotest_common.sh@1369 -- # local bs 00:07:20.402 02:24:01 -- common/autotest_common.sh@1370 -- # local nb 00:07:20.402 02:24:01 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:20.402 02:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.402 02:24:01 -- common/autotest_common.sh@10 -- # set +x 00:07:20.402 
02:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.402 02:24:01 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:20.402 { 00:07:20.402 "aliases": [ 00:07:20.402 "452a1d34-c759-4d9a-b19d-48e3ccb57510" 00:07:20.402 ], 00:07:20.402 "assigned_rate_limits": { 00:07:20.402 "r_mbytes_per_sec": 0, 00:07:20.402 "rw_ios_per_sec": 0, 00:07:20.402 "rw_mbytes_per_sec": 0, 00:07:20.402 "w_mbytes_per_sec": 0 00:07:20.402 }, 00:07:20.402 "block_size": 512, 00:07:20.402 "claim_type": "exclusive_write", 00:07:20.402 "claimed": true, 00:07:20.402 "driver_specific": {}, 00:07:20.402 "memory_domains": [ 00:07:20.402 { 00:07:20.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.402 "dma_device_type": 2 00:07:20.402 } 00:07:20.402 ], 00:07:20.402 "name": "Malloc1", 00:07:20.402 "num_blocks": 1048576, 00:07:20.402 "product_name": "Malloc disk", 00:07:20.402 "supported_io_types": { 00:07:20.402 "abort": true, 00:07:20.402 "compare": false, 00:07:20.402 "compare_and_write": false, 00:07:20.402 "flush": true, 00:07:20.402 "nvme_admin": false, 00:07:20.402 "nvme_io": false, 00:07:20.402 "read": true, 00:07:20.402 "reset": true, 00:07:20.402 "unmap": true, 00:07:20.402 "write": true, 00:07:20.402 "write_zeroes": true 00:07:20.402 }, 00:07:20.402 "uuid": "452a1d34-c759-4d9a-b19d-48e3ccb57510", 00:07:20.402 "zoned": false 00:07:20.402 } 00:07:20.402 ]' 00:07:20.402 02:24:01 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:20.661 02:24:01 -- common/autotest_common.sh@1372 -- # bs=512 00:07:20.661 02:24:01 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:20.661 02:24:01 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:20.661 02:24:01 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:20.661 02:24:01 -- common/autotest_common.sh@1377 -- # echo 512 00:07:20.661 02:24:01 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:20.661 02:24:01 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.661 02:24:01 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.661 02:24:01 -- common/autotest_common.sh@1187 -- # local i=0 00:07:20.661 02:24:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.661 02:24:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:20.661 02:24:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:23.225 02:24:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:23.225 02:24:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:23.226 02:24:03 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:23.226 02:24:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:23.226 02:24:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:23.226 02:24:03 -- common/autotest_common.sh@1197 -- # return 0 00:07:23.226 02:24:03 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:23.226 02:24:03 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:23.226 02:24:03 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:23.226 02:24:03 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:23.226 02:24:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:23.226 02:24:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:23.226 02:24:03 -- 
setup/common.sh@80 -- # echo 536870912 00:07:23.226 02:24:03 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:23.226 02:24:03 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:23.226 02:24:03 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:23.226 02:24:03 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:23.226 02:24:03 -- target/filesystem.sh@69 -- # partprobe 00:07:23.226 02:24:03 -- target/filesystem.sh@70 -- # sleep 1 00:07:24.162 02:24:04 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:24.162 02:24:04 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:24.162 02:24:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:24.162 02:24:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.162 02:24:04 -- common/autotest_common.sh@10 -- # set +x 00:07:24.162 ************************************ 00:07:24.162 START TEST filesystem_ext4 00:07:24.162 ************************************ 00:07:24.162 02:24:04 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:24.162 02:24:04 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:24.162 02:24:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.162 02:24:04 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:24.162 02:24:04 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:24.162 02:24:04 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:24.162 02:24:04 -- common/autotest_common.sh@914 -- # local i=0 00:07:24.162 02:24:04 -- common/autotest_common.sh@915 -- # local force 00:07:24.162 02:24:04 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:24.162 02:24:04 -- common/autotest_common.sh@918 -- # force=-F 00:07:24.162 02:24:04 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:24.162 mke2fs 1.47.0 (5-Feb-2023) 00:07:24.162 Discarding device blocks: 0/522240 done 00:07:24.162 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:24.162 Filesystem UUID: 24ca57e8-599c-442c-be89-89a6bf9760d5 00:07:24.162 Superblock backups stored on blocks: 00:07:24.162 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:24.162 00:07:24.162 Allocating group tables: 0/64 done 00:07:24.162 Writing inode tables: 0/64 done 00:07:24.162 Creating journal (8192 blocks): done 00:07:24.162 Writing superblocks and filesystem accounting information: 0/64 done 00:07:24.162 00:07:24.162 02:24:04 -- common/autotest_common.sh@931 -- # return 0 00:07:24.162 02:24:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.428 02:24:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.688 02:24:10 -- target/filesystem.sh@25 -- # sync 00:07:29.688 02:24:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.688 02:24:10 -- target/filesystem.sh@27 -- # sync 00:07:29.688 02:24:10 -- target/filesystem.sh@29 -- # i=0 00:07:29.688 02:24:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.688 02:24:10 -- target/filesystem.sh@37 -- # kill -0 60573 00:07:29.688 02:24:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.688 02:24:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.688 02:24:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.688 02:24:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.688 00:07:29.688 real 0m5.660s 00:07:29.688 user 0m0.027s 00:07:29.688 sys 0m0.065s 00:07:29.688 
02:24:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.688 02:24:10 -- common/autotest_common.sh@10 -- # set +x 00:07:29.688 ************************************ 00:07:29.688 END TEST filesystem_ext4 00:07:29.688 ************************************ 00:07:29.688 02:24:10 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:29.688 02:24:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:29.688 02:24:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.688 02:24:10 -- common/autotest_common.sh@10 -- # set +x 00:07:29.688 ************************************ 00:07:29.688 START TEST filesystem_btrfs 00:07:29.688 ************************************ 00:07:29.688 02:24:10 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:29.688 02:24:10 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:29.688 02:24:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.688 02:24:10 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:29.688 02:24:10 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:29.688 02:24:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:29.688 02:24:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:29.688 02:24:10 -- common/autotest_common.sh@915 -- # local force 00:07:29.688 02:24:10 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:29.688 02:24:10 -- common/autotest_common.sh@920 -- # force=-f 00:07:29.688 02:24:10 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:29.947 btrfs-progs v6.8.1 00:07:29.947 See https://btrfs.readthedocs.io for more information. 00:07:29.947 00:07:29.947 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
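The ext4 pass that just finished (real 0m5.660s above) and the btrfs and xfs passes that follow all run the same short I/O cycle against the exported namespace; nothing is filesystem-specific beyond the mkfs invocation. A condensed version of that cycle, with the device, mountpoint and PID names exactly as they appear in the trace:

# One filesystem pass as driven by filesystem.sh (sketch; $nvmfpid is the target PID
# recorded at startup, 60573 in this run).
mkfs.ext4 -F /dev/nvme0n1p1                 # btrfs and xfs passes use mkfs.btrfs -f / mkfs.xfs -f
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                       # prove the filesystem accepts writes over NVMe/TCP
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                          # the target process must have survived the I/O
lsblk -l -o NAME | grep -q -w nvme0n1       # controller is still visible on the host
lsblk -l -o NAME | grep -q -w nvme0n1p1     # and so is the partition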
00:07:29.947 NOTE: several default settings have changed in version 5.15, please make sure 00:07:29.947 this does not affect your deployments: 00:07:29.947 - DUP for metadata (-m dup) 00:07:29.947 - enabled no-holes (-O no-holes) 00:07:29.947 - enabled free-space-tree (-R free-space-tree) 00:07:29.947 00:07:29.947 Label: (null) 00:07:29.947 UUID: fb1e6f54-b354-4853-acfe-f2ff76b82d46 00:07:29.947 Node size: 16384 00:07:29.947 Sector size: 4096 (CPU page size: 4096) 00:07:29.947 Filesystem size: 510.00MiB 00:07:29.947 Block group profiles: 00:07:29.947 Data: single 8.00MiB 00:07:29.947 Metadata: DUP 32.00MiB 00:07:29.947 System: DUP 8.00MiB 00:07:29.947 SSD detected: yes 00:07:29.947 Zoned device: no 00:07:29.947 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:29.947 Checksum: crc32c 00:07:29.947 Number of devices: 1 00:07:29.947 Devices: 00:07:29.947 ID SIZE PATH 00:07:29.947 1 510.00MiB /dev/nvme0n1p1 00:07:29.947 00:07:29.947 02:24:10 -- common/autotest_common.sh@931 -- # return 0 00:07:29.947 02:24:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.947 02:24:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.947 02:24:10 -- target/filesystem.sh@25 -- # sync 00:07:29.947 02:24:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.947 02:24:10 -- target/filesystem.sh@27 -- # sync 00:07:29.947 02:24:10 -- target/filesystem.sh@29 -- # i=0 00:07:29.947 02:24:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.947 02:24:10 -- target/filesystem.sh@37 -- # kill -0 60573 00:07:29.947 02:24:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.947 02:24:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.947 02:24:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.947 02:24:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.947 00:07:29.947 real 0m0.231s 00:07:29.947 user 0m0.025s 00:07:29.947 sys 0m0.063s 00:07:29.947 02:24:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.947 02:24:10 -- common/autotest_common.sh@10 -- # set +x 00:07:29.947 ************************************ 00:07:29.947 END TEST filesystem_btrfs 00:07:29.947 ************************************ 00:07:29.947 02:24:10 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:29.947 02:24:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:29.947 02:24:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.947 02:24:10 -- common/autotest_common.sh@10 -- # set +x 00:07:29.947 ************************************ 00:07:29.947 START TEST filesystem_xfs 00:07:29.947 ************************************ 00:07:29.947 02:24:10 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.947 02:24:10 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.947 02:24:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.947 02:24:10 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.947 02:24:10 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:29.947 02:24:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:29.947 02:24:10 -- common/autotest_common.sh@914 -- # local i=0 00:07:29.947 02:24:10 -- common/autotest_common.sh@915 -- # local force 00:07:29.947 02:24:10 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:29.947 02:24:10 -- common/autotest_common.sh@920 -- # force=-f 00:07:29.947 02:24:10 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:07:30.206 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:30.206 = sectsz=512 attr=2, projid32bit=1 00:07:30.206 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:30.206 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:30.206 data = bsize=4096 blocks=130560, imaxpct=25 00:07:30.206 = sunit=0 swidth=0 blks 00:07:30.206 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:30.206 log =internal log bsize=4096 blocks=16384, version=2 00:07:30.206 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:30.206 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.774 Discarding blocks...Done. 00:07:30.774 02:24:11 -- common/autotest_common.sh@931 -- # return 0 00:07:30.774 02:24:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.305 02:24:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.305 02:24:13 -- target/filesystem.sh@25 -- # sync 00:07:33.305 02:24:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.305 02:24:13 -- target/filesystem.sh@27 -- # sync 00:07:33.305 02:24:13 -- target/filesystem.sh@29 -- # i=0 00:07:33.305 02:24:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.305 02:24:13 -- target/filesystem.sh@37 -- # kill -0 60573 00:07:33.305 02:24:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.305 02:24:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.305 02:24:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.305 02:24:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.305 00:07:33.305 real 0m3.196s 00:07:33.305 user 0m0.018s 00:07:33.305 sys 0m0.065s 00:07:33.305 02:24:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.305 02:24:13 -- common/autotest_common.sh@10 -- # set +x 00:07:33.305 ************************************ 00:07:33.305 END TEST filesystem_xfs 00:07:33.305 ************************************ 00:07:33.305 02:24:13 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:33.305 02:24:13 -- target/filesystem.sh@93 -- # sync 00:07:33.305 02:24:13 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:33.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.305 02:24:13 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:33.305 02:24:13 -- common/autotest_common.sh@1208 -- # local i=0 00:07:33.305 02:24:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:33.305 02:24:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.305 02:24:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:33.305 02:24:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.305 02:24:13 -- common/autotest_common.sh@1220 -- # return 0 00:07:33.305 02:24:13 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.305 02:24:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.305 02:24:13 -- common/autotest_common.sh@10 -- # set +x 00:07:33.305 02:24:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.305 02:24:13 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:33.305 02:24:13 -- target/filesystem.sh@101 -- # killprocess 60573 00:07:33.305 02:24:13 -- common/autotest_common.sh@936 -- # '[' -z 60573 ']' 00:07:33.305 02:24:13 -- common/autotest_common.sh@940 -- # kill -0 60573 00:07:33.305 02:24:13 -- common/autotest_common.sh@941 -- # uname 00:07:33.305 02:24:13 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:33.305 02:24:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60573 00:07:33.305 02:24:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:33.305 02:24:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:33.305 02:24:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60573' 00:07:33.305 killing process with pid 60573 00:07:33.305 02:24:13 -- common/autotest_common.sh@955 -- # kill 60573 00:07:33.305 02:24:13 -- common/autotest_common.sh@960 -- # wait 60573 00:07:33.898 02:24:14 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:33.898 00:07:33.898 real 0m14.825s 00:07:33.898 user 0m56.837s 00:07:33.898 sys 0m1.755s 00:07:33.898 02:24:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.898 02:24:14 -- common/autotest_common.sh@10 -- # set +x 00:07:33.898 ************************************ 00:07:33.898 END TEST nvmf_filesystem_no_in_capsule 00:07:33.898 ************************************ 00:07:33.898 02:24:14 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:33.898 02:24:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:33.898 02:24:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.898 02:24:14 -- common/autotest_common.sh@10 -- # set +x 00:07:34.156 ************************************ 00:07:34.157 START TEST nvmf_filesystem_in_capsule 00:07:34.157 ************************************ 00:07:34.157 02:24:14 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:07:34.157 02:24:14 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:34.157 02:24:14 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:34.157 02:24:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:34.157 02:24:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.157 02:24:14 -- common/autotest_common.sh@10 -- # set +x 00:07:34.157 02:24:14 -- nvmf/common.sh@469 -- # nvmfpid=60945 00:07:34.157 02:24:14 -- nvmf/common.sh@470 -- # waitforlisten 60945 00:07:34.157 02:24:14 -- common/autotest_common.sh@829 -- # '[' -z 60945 ']' 00:07:34.157 02:24:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.157 02:24:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.157 02:24:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.157 02:24:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.157 02:24:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.157 02:24:14 -- common/autotest_common.sh@10 -- # set +x 00:07:34.157 [2024-11-21 02:24:14.609018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
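The nvmf_filesystem_in_capsule test starting here repeats the whole flow of the previous section with in_capsule=4096 instead of 0; the only functional difference is the transport creation, where -c sets the transport's in-capsule data size so small write payloads can travel inside the command capsule. The RPC sequence, assuming (as the helper names suggest) that rpc_cmd in the trace is a thin wrapper around scripts/rpc.py and using an illustrative path, is roughly:

# Sketch of the provisioning for the in_capsule pass; all arguments are taken from the trace.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096      # first pass used -c 0 (no in-capsule data)
$rpc bdev_malloc_create 512 512 -b Malloc1                # 512 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420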
00:07:34.157 [2024-11-21 02:24:14.609116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.157 [2024-11-21 02:24:14.744383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.415 [2024-11-21 02:24:14.840374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:34.415 [2024-11-21 02:24:14.840510] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.415 [2024-11-21 02:24:14.840522] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.415 [2024-11-21 02:24:14.840530] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.415 [2024-11-21 02:24:14.840697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.415 [2024-11-21 02:24:14.840872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.415 [2024-11-21 02:24:14.841393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.415 [2024-11-21 02:24:14.841467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.981 02:24:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.981 02:24:15 -- common/autotest_common.sh@862 -- # return 0 00:07:34.981 02:24:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:34.981 02:24:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.981 02:24:15 -- common/autotest_common.sh@10 -- # set +x 00:07:34.981 02:24:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.981 02:24:15 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:34.981 02:24:15 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:34.981 02:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.981 02:24:15 -- common/autotest_common.sh@10 -- # set +x 00:07:34.981 [2024-11-21 02:24:15.578409] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.981 02:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.981 02:24:15 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:34.981 02:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.981 02:24:15 -- common/autotest_common.sh@10 -- # set +x 00:07:35.239 Malloc1 00:07:35.239 02:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.239 02:24:15 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:35.239 02:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.239 02:24:15 -- common/autotest_common.sh@10 -- # set +x 00:07:35.239 02:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.239 02:24:15 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.239 02:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.239 02:24:15 -- common/autotest_common.sh@10 -- # set +x 00:07:35.239 02:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.239 02:24:15 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.239 02:24:15 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.239 02:24:15 -- common/autotest_common.sh@10 -- # set +x 00:07:35.239 [2024-11-21 02:24:15.812628] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.239 02:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.239 02:24:15 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:35.239 02:24:15 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:07:35.239 02:24:15 -- common/autotest_common.sh@1368 -- # local bdev_info 00:07:35.239 02:24:15 -- common/autotest_common.sh@1369 -- # local bs 00:07:35.239 02:24:15 -- common/autotest_common.sh@1370 -- # local nb 00:07:35.239 02:24:15 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:35.239 02:24:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.239 02:24:15 -- common/autotest_common.sh@10 -- # set +x 00:07:35.239 02:24:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.239 02:24:15 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:07:35.240 { 00:07:35.240 "aliases": [ 00:07:35.240 "d20c4595-edaf-48e2-8bf5-878937d18a8a" 00:07:35.240 ], 00:07:35.240 "assigned_rate_limits": { 00:07:35.240 "r_mbytes_per_sec": 0, 00:07:35.240 "rw_ios_per_sec": 0, 00:07:35.240 "rw_mbytes_per_sec": 0, 00:07:35.240 "w_mbytes_per_sec": 0 00:07:35.240 }, 00:07:35.240 "block_size": 512, 00:07:35.240 "claim_type": "exclusive_write", 00:07:35.240 "claimed": true, 00:07:35.240 "driver_specific": {}, 00:07:35.240 "memory_domains": [ 00:07:35.240 { 00:07:35.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.240 "dma_device_type": 2 00:07:35.240 } 00:07:35.240 ], 00:07:35.240 "name": "Malloc1", 00:07:35.240 "num_blocks": 1048576, 00:07:35.240 "product_name": "Malloc disk", 00:07:35.240 "supported_io_types": { 00:07:35.240 "abort": true, 00:07:35.240 "compare": false, 00:07:35.240 "compare_and_write": false, 00:07:35.240 "flush": true, 00:07:35.240 "nvme_admin": false, 00:07:35.240 "nvme_io": false, 00:07:35.240 "read": true, 00:07:35.240 "reset": true, 00:07:35.240 "unmap": true, 00:07:35.240 "write": true, 00:07:35.240 "write_zeroes": true 00:07:35.240 }, 00:07:35.240 "uuid": "d20c4595-edaf-48e2-8bf5-878937d18a8a", 00:07:35.240 "zoned": false 00:07:35.240 } 00:07:35.240 ]' 00:07:35.240 02:24:15 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:07:35.240 02:24:15 -- common/autotest_common.sh@1372 -- # bs=512 00:07:35.240 02:24:15 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:07:35.498 02:24:15 -- common/autotest_common.sh@1373 -- # nb=1048576 00:07:35.498 02:24:15 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:07:35.498 02:24:15 -- common/autotest_common.sh@1377 -- # echo 512 00:07:35.498 02:24:15 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:35.498 02:24:15 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:35.498 02:24:16 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:35.498 02:24:16 -- common/autotest_common.sh@1187 -- # local i=0 00:07:35.498 02:24:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.498 02:24:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:07:35.498 02:24:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:07:38.045 02:24:18 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:07:38.045 02:24:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:07:38.045 02:24:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.045 02:24:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:07:38.045 02:24:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.045 02:24:18 -- common/autotest_common.sh@1197 -- # return 0 00:07:38.045 02:24:18 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:38.045 02:24:18 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:38.045 02:24:18 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:38.045 02:24:18 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:38.045 02:24:18 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:38.045 02:24:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:38.045 02:24:18 -- setup/common.sh@80 -- # echo 536870912 00:07:38.045 02:24:18 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:38.045 02:24:18 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:38.045 02:24:18 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:38.045 02:24:18 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:38.045 02:24:18 -- target/filesystem.sh@69 -- # partprobe 00:07:38.045 02:24:18 -- target/filesystem.sh@70 -- # sleep 1 00:07:38.611 02:24:19 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:38.611 02:24:19 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:38.611 02:24:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:38.611 02:24:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.611 02:24:19 -- common/autotest_common.sh@10 -- # set +x 00:07:38.869 ************************************ 00:07:38.869 START TEST filesystem_in_capsule_ext4 00:07:38.869 ************************************ 00:07:38.869 02:24:19 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:38.869 02:24:19 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:38.869 02:24:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.869 02:24:19 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:38.869 02:24:19 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:38.869 02:24:19 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:38.869 02:24:19 -- common/autotest_common.sh@914 -- # local i=0 00:07:38.869 02:24:19 -- common/autotest_common.sh@915 -- # local force 00:07:38.869 02:24:19 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:38.869 02:24:19 -- common/autotest_common.sh@918 -- # force=-F 00:07:38.869 02:24:19 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:38.869 mke2fs 1.47.0 (5-Feb-2023) 00:07:38.869 Discarding device blocks: 0/522240 done 00:07:38.869 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:38.869 Filesystem UUID: c45114a6-141c-4569-b823-0c4bb1be4d55 00:07:38.869 Superblock backups stored on blocks: 00:07:38.869 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:38.869 00:07:38.869 Allocating group tables: 0/64 done 00:07:38.869 Writing inode tables: 0/64 done 00:07:38.869 Creating journal (8192 blocks): done 00:07:38.869 Writing superblocks and filesystem accounting information: 0/64 done 00:07:38.869 00:07:38.869 02:24:19 
-- common/autotest_common.sh@931 -- # return 0 00:07:38.869 02:24:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.428 02:24:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.428 02:24:24 -- target/filesystem.sh@25 -- # sync 00:07:45.428 02:24:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.428 02:24:24 -- target/filesystem.sh@27 -- # sync 00:07:45.428 02:24:24 -- target/filesystem.sh@29 -- # i=0 00:07:45.428 02:24:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.428 02:24:24 -- target/filesystem.sh@37 -- # kill -0 60945 00:07:45.428 02:24:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.428 02:24:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.428 02:24:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.428 02:24:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.428 ************************************ 00:07:45.428 END TEST filesystem_in_capsule_ext4 00:07:45.428 ************************************ 00:07:45.428 00:07:45.428 real 0m5.688s 00:07:45.428 user 0m0.025s 00:07:45.428 sys 0m0.067s 00:07:45.428 02:24:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.428 02:24:24 -- common/autotest_common.sh@10 -- # set +x 00:07:45.428 02:24:24 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:45.428 02:24:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:45.428 02:24:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.428 02:24:24 -- common/autotest_common.sh@10 -- # set +x 00:07:45.428 ************************************ 00:07:45.428 START TEST filesystem_in_capsule_btrfs 00:07:45.429 ************************************ 00:07:45.429 02:24:24 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:45.429 02:24:24 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:45.429 02:24:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.429 02:24:24 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:45.429 02:24:24 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:45.429 02:24:24 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:45.429 02:24:24 -- common/autotest_common.sh@914 -- # local i=0 00:07:45.429 02:24:24 -- common/autotest_common.sh@915 -- # local force 00:07:45.429 02:24:24 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:45.429 02:24:24 -- common/autotest_common.sh@920 -- # force=-f 00:07:45.429 02:24:24 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:45.429 btrfs-progs v6.8.1 00:07:45.429 See https://btrfs.readthedocs.io for more information. 00:07:45.429 00:07:45.429 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
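Before any of these filesystem passes, the host side connects to the subsystem, waits for the namespace to surface under its serial number, and checks that the block device size matches the malloc bdev reported over RPC (the JSON dumps earlier in the trace). A simplified sketch of those host-side steps, where the 15x2s polling loop stands in for the waitforserial helper and NVME_HOSTNQN/NVME_HOSTID are the values produced by nvme gen-hostnqn in nvmf/common.sh:

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
for i in $(seq 1 15); do
    lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME && break
    sleep 2
done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
bs=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512 in this run
nb=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576 in this run
echo $((bs * nb))                          # 536870912, the same figure the nvme_size check reaches
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1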
00:07:45.429 NOTE: several default settings have changed in version 5.15, please make sure 00:07:45.429 this does not affect your deployments: 00:07:45.429 - DUP for metadata (-m dup) 00:07:45.429 - enabled no-holes (-O no-holes) 00:07:45.429 - enabled free-space-tree (-R free-space-tree) 00:07:45.429 00:07:45.429 Label: (null) 00:07:45.429 UUID: 1ab1e2b5-56a4-4329-84eb-8b75fdfcfa01 00:07:45.429 Node size: 16384 00:07:45.429 Sector size: 4096 (CPU page size: 4096) 00:07:45.429 Filesystem size: 510.00MiB 00:07:45.429 Block group profiles: 00:07:45.429 Data: single 8.00MiB 00:07:45.429 Metadata: DUP 32.00MiB 00:07:45.429 System: DUP 8.00MiB 00:07:45.429 SSD detected: yes 00:07:45.429 Zoned device: no 00:07:45.429 Features: extref, skinny-metadata, no-holes, free-space-tree 00:07:45.429 Checksum: crc32c 00:07:45.429 Number of devices: 1 00:07:45.429 Devices: 00:07:45.429 ID SIZE PATH 00:07:45.429 1 510.00MiB /dev/nvme0n1p1 00:07:45.429 00:07:45.429 02:24:25 -- common/autotest_common.sh@931 -- # return 0 00:07:45.429 02:24:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.429 02:24:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.429 02:24:25 -- target/filesystem.sh@25 -- # sync 00:07:45.429 02:24:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.429 02:24:25 -- target/filesystem.sh@27 -- # sync 00:07:45.429 02:24:25 -- target/filesystem.sh@29 -- # i=0 00:07:45.429 02:24:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.429 02:24:25 -- target/filesystem.sh@37 -- # kill -0 60945 00:07:45.429 02:24:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.429 02:24:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.429 02:24:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.429 02:24:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.429 ************************************ 00:07:45.429 END TEST filesystem_in_capsule_btrfs 00:07:45.429 ************************************ 00:07:45.429 00:07:45.429 real 0m0.273s 00:07:45.429 user 0m0.021s 00:07:45.429 sys 0m0.062s 00:07:45.429 02:24:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.429 02:24:25 -- common/autotest_common.sh@10 -- # set +x 00:07:45.429 02:24:25 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:45.429 02:24:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:45.429 02:24:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.429 02:24:25 -- common/autotest_common.sh@10 -- # set +x 00:07:45.429 ************************************ 00:07:45.429 START TEST filesystem_in_capsule_xfs 00:07:45.429 ************************************ 00:07:45.429 02:24:25 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:07:45.429 02:24:25 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:45.429 02:24:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.429 02:24:25 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:45.429 02:24:25 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:45.429 02:24:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:45.429 02:24:25 -- common/autotest_common.sh@914 -- # local i=0 00:07:45.429 02:24:25 -- common/autotest_common.sh@915 -- # local force 00:07:45.429 02:24:25 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:45.429 02:24:25 -- common/autotest_common.sh@920 -- # force=-f 00:07:45.429 02:24:25 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:45.429 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:45.429 = sectsz=512 attr=2, projid32bit=1 00:07:45.429 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:45.429 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:45.429 data = bsize=4096 blocks=130560, imaxpct=25 00:07:45.429 = sunit=0 swidth=0 blks 00:07:45.429 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:45.429 log =internal log bsize=4096 blocks=16384, version=2 00:07:45.429 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:45.429 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:45.687 Discarding blocks...Done. 00:07:45.687 02:24:26 -- common/autotest_common.sh@931 -- # return 0 00:07:45.687 02:24:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.586 02:24:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.586 02:24:27 -- target/filesystem.sh@25 -- # sync 00:07:47.586 02:24:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.586 02:24:27 -- target/filesystem.sh@27 -- # sync 00:07:47.586 02:24:28 -- target/filesystem.sh@29 -- # i=0 00:07:47.586 02:24:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.586 02:24:28 -- target/filesystem.sh@37 -- # kill -0 60945 00:07:47.586 02:24:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.586 02:24:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.586 02:24:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.586 02:24:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.586 ************************************ 00:07:47.586 END TEST filesystem_in_capsule_xfs 00:07:47.586 ************************************ 00:07:47.586 00:07:47.586 real 0m2.716s 00:07:47.586 user 0m0.019s 00:07:47.586 sys 0m0.061s 00:07:47.586 02:24:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:47.586 02:24:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.586 02:24:28 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:47.586 02:24:28 -- target/filesystem.sh@93 -- # sync 00:07:47.586 02:24:28 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:47.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.586 02:24:28 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:47.586 02:24:28 -- common/autotest_common.sh@1208 -- # local i=0 00:07:47.586 02:24:28 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:07:47.586 02:24:28 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.586 02:24:28 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.586 02:24:28 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:07:47.586 02:24:28 -- common/autotest_common.sh@1220 -- # return 0 00:07:47.586 02:24:28 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.586 02:24:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.586 02:24:28 -- common/autotest_common.sh@10 -- # set +x 00:07:47.586 02:24:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.586 02:24:28 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:47.586 02:24:28 -- target/filesystem.sh@101 -- # killprocess 60945 00:07:47.586 02:24:28 -- common/autotest_common.sh@936 -- # '[' -z 60945 ']' 00:07:47.586 02:24:28 -- common/autotest_common.sh@940 -- # kill -0 60945 00:07:47.586 02:24:28 -- 
common/autotest_common.sh@941 -- # uname 00:07:47.586 02:24:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:47.586 02:24:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60945 00:07:47.845 killing process with pid 60945 00:07:47.845 02:24:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:47.845 02:24:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:47.845 02:24:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60945' 00:07:47.845 02:24:28 -- common/autotest_common.sh@955 -- # kill 60945 00:07:47.845 02:24:28 -- common/autotest_common.sh@960 -- # wait 60945 00:07:48.412 ************************************ 00:07:48.412 END TEST nvmf_filesystem_in_capsule 00:07:48.412 ************************************ 00:07:48.412 02:24:28 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:48.412 00:07:48.412 real 0m14.281s 00:07:48.412 user 0m54.782s 00:07:48.412 sys 0m1.672s 00:07:48.412 02:24:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.412 02:24:28 -- common/autotest_common.sh@10 -- # set +x 00:07:48.412 02:24:28 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:48.412 02:24:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:48.412 02:24:28 -- nvmf/common.sh@116 -- # sync 00:07:48.412 02:24:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:48.412 02:24:28 -- nvmf/common.sh@119 -- # set +e 00:07:48.412 02:24:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:48.412 02:24:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:48.412 rmmod nvme_tcp 00:07:48.412 rmmod nvme_fabrics 00:07:48.412 rmmod nvme_keyring 00:07:48.412 02:24:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:48.412 02:24:28 -- nvmf/common.sh@123 -- # set -e 00:07:48.412 02:24:28 -- nvmf/common.sh@124 -- # return 0 00:07:48.412 02:24:28 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:48.412 02:24:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:48.412 02:24:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:48.412 02:24:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:48.412 02:24:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.412 02:24:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:48.412 02:24:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.412 02:24:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.412 02:24:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.412 02:24:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:48.412 ************************************ 00:07:48.412 END TEST nvmf_filesystem 00:07:48.412 ************************************ 00:07:48.412 00:07:48.412 real 0m30.133s 00:07:48.412 user 1m52.012s 00:07:48.412 sys 0m3.872s 00:07:48.412 02:24:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.412 02:24:29 -- common/autotest_common.sh@10 -- # set +x 00:07:48.412 02:24:29 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:48.412 02:24:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:48.412 02:24:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.412 02:24:29 -- common/autotest_common.sh@10 -- # set +x 00:07:48.412 ************************************ 00:07:48.412 START TEST nvmf_discovery 00:07:48.412 ************************************ 00:07:48.412 02:24:29 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:48.672 * Looking for test storage... 00:07:48.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:48.672 02:24:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:48.672 02:24:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:48.672 02:24:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:48.672 02:24:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:48.672 02:24:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:48.672 02:24:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:48.672 02:24:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:48.672 02:24:29 -- scripts/common.sh@335 -- # IFS=.-: 00:07:48.672 02:24:29 -- scripts/common.sh@335 -- # read -ra ver1 00:07:48.672 02:24:29 -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.672 02:24:29 -- scripts/common.sh@336 -- # read -ra ver2 00:07:48.672 02:24:29 -- scripts/common.sh@337 -- # local 'op=<' 00:07:48.672 02:24:29 -- scripts/common.sh@339 -- # ver1_l=2 00:07:48.672 02:24:29 -- scripts/common.sh@340 -- # ver2_l=1 00:07:48.672 02:24:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:48.672 02:24:29 -- scripts/common.sh@343 -- # case "$op" in 00:07:48.672 02:24:29 -- scripts/common.sh@344 -- # : 1 00:07:48.672 02:24:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:48.672 02:24:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.672 02:24:29 -- scripts/common.sh@364 -- # decimal 1 00:07:48.672 02:24:29 -- scripts/common.sh@352 -- # local d=1 00:07:48.672 02:24:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.672 02:24:29 -- scripts/common.sh@354 -- # echo 1 00:07:48.672 02:24:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:48.672 02:24:29 -- scripts/common.sh@365 -- # decimal 2 00:07:48.672 02:24:29 -- scripts/common.sh@352 -- # local d=2 00:07:48.672 02:24:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.672 02:24:29 -- scripts/common.sh@354 -- # echo 2 00:07:48.672 02:24:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:48.672 02:24:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:48.672 02:24:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:48.672 02:24:29 -- scripts/common.sh@367 -- # return 0 00:07:48.672 02:24:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.672 02:24:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:48.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.672 --rc genhtml_branch_coverage=1 00:07:48.672 --rc genhtml_function_coverage=1 00:07:48.672 --rc genhtml_legend=1 00:07:48.672 --rc geninfo_all_blocks=1 00:07:48.672 --rc geninfo_unexecuted_blocks=1 00:07:48.672 00:07:48.672 ' 00:07:48.672 02:24:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:48.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.672 --rc genhtml_branch_coverage=1 00:07:48.672 --rc genhtml_function_coverage=1 00:07:48.672 --rc genhtml_legend=1 00:07:48.672 --rc geninfo_all_blocks=1 00:07:48.672 --rc geninfo_unexecuted_blocks=1 00:07:48.672 00:07:48.672 ' 00:07:48.672 02:24:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:48.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.672 --rc genhtml_branch_coverage=1 00:07:48.672 --rc genhtml_function_coverage=1 00:07:48.672 --rc genhtml_legend=1 00:07:48.672 
--rc geninfo_all_blocks=1 00:07:48.672 --rc geninfo_unexecuted_blocks=1 00:07:48.672 00:07:48.672 ' 00:07:48.672 02:24:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:48.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.672 --rc genhtml_branch_coverage=1 00:07:48.672 --rc genhtml_function_coverage=1 00:07:48.672 --rc genhtml_legend=1 00:07:48.672 --rc geninfo_all_blocks=1 00:07:48.672 --rc geninfo_unexecuted_blocks=1 00:07:48.672 00:07:48.672 ' 00:07:48.672 02:24:29 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:48.672 02:24:29 -- nvmf/common.sh@7 -- # uname -s 00:07:48.672 02:24:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.672 02:24:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.672 02:24:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.672 02:24:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.672 02:24:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.672 02:24:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.672 02:24:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.672 02:24:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.672 02:24:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.672 02:24:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.672 02:24:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:48.672 02:24:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:48.672 02:24:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.672 02:24:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.672 02:24:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:48.672 02:24:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.672 02:24:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.672 02:24:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.672 02:24:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.672 02:24:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.672 02:24:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.672 02:24:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.672 02:24:29 -- paths/export.sh@5 -- # export PATH 00:07:48.672 02:24:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.672 02:24:29 -- nvmf/common.sh@46 -- # : 0 00:07:48.672 02:24:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:48.672 02:24:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:48.672 02:24:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:48.672 02:24:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.672 02:24:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.672 02:24:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:48.672 02:24:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:48.672 02:24:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:48.672 02:24:29 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:48.672 02:24:29 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:48.672 02:24:29 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:48.672 02:24:29 -- target/discovery.sh@15 -- # hash nvme 00:07:48.672 02:24:29 -- target/discovery.sh@20 -- # nvmftestinit 00:07:48.672 02:24:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:48.672 02:24:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.672 02:24:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:48.672 02:24:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:48.672 02:24:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:48.672 02:24:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.672 02:24:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.672 02:24:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.672 02:24:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:48.672 02:24:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:48.672 02:24:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:48.672 02:24:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:48.672 02:24:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:48.672 02:24:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:48.672 02:24:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.672 02:24:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.672 02:24:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:48.672 02:24:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:48.672 02:24:29 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:48.672 02:24:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:48.672 02:24:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:48.672 02:24:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.672 02:24:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:48.672 02:24:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:48.672 02:24:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:48.672 02:24:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:48.672 02:24:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:48.672 02:24:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:48.672 Cannot find device "nvmf_tgt_br" 00:07:48.672 02:24:29 -- nvmf/common.sh@154 -- # true 00:07:48.672 02:24:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.672 Cannot find device "nvmf_tgt_br2" 00:07:48.673 02:24:29 -- nvmf/common.sh@155 -- # true 00:07:48.673 02:24:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:48.673 02:24:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:48.673 Cannot find device "nvmf_tgt_br" 00:07:48.673 02:24:29 -- nvmf/common.sh@157 -- # true 00:07:48.673 02:24:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:48.932 Cannot find device "nvmf_tgt_br2" 00:07:48.932 02:24:29 -- nvmf/common.sh@158 -- # true 00:07:48.932 02:24:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:48.932 02:24:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:48.932 02:24:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.932 02:24:29 -- nvmf/common.sh@161 -- # true 00:07:48.932 02:24:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.932 02:24:29 -- nvmf/common.sh@162 -- # true 00:07:48.932 02:24:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:48.932 02:24:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:48.932 02:24:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:48.932 02:24:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:48.932 02:24:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:48.932 02:24:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:48.932 02:24:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:48.932 02:24:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:48.932 02:24:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:48.932 02:24:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:48.932 02:24:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:48.932 02:24:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:48.932 02:24:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:48.932 02:24:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:48.932 02:24:29 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:48.932 02:24:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:48.932 02:24:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:48.932 02:24:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:48.932 02:24:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:48.932 02:24:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:48.932 02:24:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:49.191 02:24:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:49.191 02:24:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:49.191 02:24:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:49.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:07:49.191 00:07:49.191 --- 10.0.0.2 ping statistics --- 00:07:49.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.191 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:07:49.191 02:24:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:49.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:49.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:07:49.191 00:07:49.191 --- 10.0.0.3 ping statistics --- 00:07:49.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.191 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:07:49.191 02:24:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:49.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:49.191 00:07:49.191 --- 10.0.0.1 ping statistics --- 00:07:49.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.191 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:49.191 02:24:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.191 02:24:29 -- nvmf/common.sh@421 -- # return 0 00:07:49.191 02:24:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:49.191 02:24:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.191 02:24:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:49.191 02:24:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:49.191 02:24:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.191 02:24:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:49.191 02:24:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:49.191 02:24:29 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:49.191 02:24:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:49.191 02:24:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.191 02:24:29 -- common/autotest_common.sh@10 -- # set +x 00:07:49.191 02:24:29 -- nvmf/common.sh@469 -- # nvmfpid=61495 00:07:49.191 02:24:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:49.191 02:24:29 -- nvmf/common.sh@470 -- # waitforlisten 61495 00:07:49.191 02:24:29 -- common/autotest_common.sh@829 -- # '[' -z 61495 ']' 00:07:49.191 02:24:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
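Before the target is started, nvmf_veth_init (traced above) builds a small virtual test network. A condensed sketch of that topology, using the namespace, interface and address names visible in the trace (the real common.sh additionally brings every link up and adds a FORWARD accept rule on the bridge, exactly as traced):

    # isolated namespace that will host the SPDK NVMe-oF target
    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: one initiator-side, two target-side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and address everything in 10.0.0.0/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bridge the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP traffic (port 4420) in on the initiator interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

After this, 10.0.0.1 is the initiator address on the host side and 10.0.0.2/10.0.0.3 are the target addresses inside nvmf_tgt_ns_spdk, which is what the three ping checks above verify before nvmf_tgt is launched.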
00:07:49.191 02:24:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.191 02:24:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.191 02:24:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.191 02:24:29 -- common/autotest_common.sh@10 -- # set +x 00:07:49.191 [2024-11-21 02:24:29.692896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.191 [2024-11-21 02:24:29.692991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.450 [2024-11-21 02:24:29.835343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.450 [2024-11-21 02:24:29.955539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.450 [2024-11-21 02:24:29.956054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.450 [2024-11-21 02:24:29.956200] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.450 [2024-11-21 02:24:29.956375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.450 [2024-11-21 02:24:29.956654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.450 [2024-11-21 02:24:29.956788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.450 [2024-11-21 02:24:29.956871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.450 [2024-11-21 02:24:29.956863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.386 02:24:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.386 02:24:30 -- common/autotest_common.sh@862 -- # return 0 00:07:50.386 02:24:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:50.386 02:24:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:50.386 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 02:24:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.386 02:24:30 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.386 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.386 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 [2024-11-21 02:24:30.770430] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.386 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.386 02:24:30 -- target/discovery.sh@26 -- # seq 1 4 00:07:50.386 02:24:30 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:50.386 02:24:30 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:50.386 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.386 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 Null1 00:07:50.386 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.386 02:24:30 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.386 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.386 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 02:24:30 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:07:50.386 02:24:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:50.386 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.386 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.386 02:24:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 [2024-11-21 02:24:30.833412] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:50.387 02:24:30 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 Null2 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:50.387 02:24:30 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 Null3 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:50.387 02:24:30 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 Null4 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:50.387 02:24:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.387 02:24:30 -- common/autotest_common.sh@10 -- # set +x 00:07:50.387 02:24:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.387 02:24:30 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 4420 00:07:50.647 00:07:50.647 Discovery Log Number of Records 6, Generation counter 6 00:07:50.647 =====Discovery Log Entry 0====== 00:07:50.647 trtype: tcp 00:07:50.647 adrfam: ipv4 00:07:50.647 subtype: current discovery subsystem 00:07:50.647 treq: not required 00:07:50.647 portid: 0 00:07:50.647 trsvcid: 4420 00:07:50.647 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:50.647 traddr: 10.0.0.2 00:07:50.647 eflags: explicit discovery connections, duplicate discovery information 00:07:50.647 sectype: none 00:07:50.648 =====Discovery Log Entry 1====== 00:07:50.648 trtype: tcp 00:07:50.648 adrfam: ipv4 00:07:50.648 subtype: nvme subsystem 00:07:50.648 treq: not required 00:07:50.648 portid: 0 00:07:50.648 trsvcid: 4420 00:07:50.648 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:50.648 traddr: 10.0.0.2 00:07:50.648 eflags: none 00:07:50.648 sectype: none 00:07:50.648 =====Discovery Log Entry 2====== 00:07:50.648 trtype: tcp 00:07:50.648 adrfam: ipv4 00:07:50.648 subtype: nvme subsystem 00:07:50.648 treq: not required 00:07:50.648 portid: 0 00:07:50.648 trsvcid: 
4420 00:07:50.648 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:50.648 traddr: 10.0.0.2 00:07:50.648 eflags: none 00:07:50.648 sectype: none 00:07:50.648 =====Discovery Log Entry 3====== 00:07:50.648 trtype: tcp 00:07:50.648 adrfam: ipv4 00:07:50.648 subtype: nvme subsystem 00:07:50.648 treq: not required 00:07:50.648 portid: 0 00:07:50.648 trsvcid: 4420 00:07:50.648 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:50.648 traddr: 10.0.0.2 00:07:50.648 eflags: none 00:07:50.648 sectype: none 00:07:50.648 =====Discovery Log Entry 4====== 00:07:50.648 trtype: tcp 00:07:50.648 adrfam: ipv4 00:07:50.648 subtype: nvme subsystem 00:07:50.648 treq: not required 00:07:50.648 portid: 0 00:07:50.648 trsvcid: 4420 00:07:50.648 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:50.648 traddr: 10.0.0.2 00:07:50.648 eflags: none 00:07:50.648 sectype: none 00:07:50.648 =====Discovery Log Entry 5====== 00:07:50.648 trtype: tcp 00:07:50.648 adrfam: ipv4 00:07:50.648 subtype: discovery subsystem referral 00:07:50.648 treq: not required 00:07:50.648 portid: 0 00:07:50.648 trsvcid: 4430 00:07:50.648 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:50.648 traddr: 10.0.0.2 00:07:50.648 eflags: none 00:07:50.648 sectype: none 00:07:50.648 Perform nvmf subsystem discovery via RPC 00:07:50.648 02:24:31 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:50.648 02:24:31 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:50.648 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.648 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.648 [2024-11-21 02:24:31.069565] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:50.648 [ 00:07:50.648 { 00:07:50.648 "allow_any_host": true, 00:07:50.648 "hosts": [], 00:07:50.648 "listen_addresses": [ 00:07:50.648 { 00:07:50.648 "adrfam": "IPv4", 00:07:50.648 "traddr": "10.0.0.2", 00:07:50.648 "transport": "TCP", 00:07:50.648 "trsvcid": "4420", 00:07:50.648 "trtype": "TCP" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:50.648 "subtype": "Discovery" 00:07:50.648 }, 00:07:50.648 { 00:07:50.648 "allow_any_host": true, 00:07:50.648 "hosts": [], 00:07:50.648 "listen_addresses": [ 00:07:50.648 { 00:07:50.648 "adrfam": "IPv4", 00:07:50.648 "traddr": "10.0.0.2", 00:07:50.648 "transport": "TCP", 00:07:50.648 "trsvcid": "4420", 00:07:50.648 "trtype": "TCP" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "max_cntlid": 65519, 00:07:50.648 "max_namespaces": 32, 00:07:50.648 "min_cntlid": 1, 00:07:50.648 "model_number": "SPDK bdev Controller", 00:07:50.648 "namespaces": [ 00:07:50.648 { 00:07:50.648 "bdev_name": "Null1", 00:07:50.648 "name": "Null1", 00:07:50.648 "nguid": "0141EC7F32244ADE81AF1574788E4A5A", 00:07:50.648 "nsid": 1, 00:07:50.648 "uuid": "0141ec7f-3224-4ade-81af-1574788e4a5a" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.648 "serial_number": "SPDK00000000000001", 00:07:50.648 "subtype": "NVMe" 00:07:50.648 }, 00:07:50.648 { 00:07:50.648 "allow_any_host": true, 00:07:50.648 "hosts": [], 00:07:50.648 "listen_addresses": [ 00:07:50.648 { 00:07:50.648 "adrfam": "IPv4", 00:07:50.648 "traddr": "10.0.0.2", 00:07:50.648 "transport": "TCP", 00:07:50.648 "trsvcid": "4420", 00:07:50.648 "trtype": "TCP" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "max_cntlid": 65519, 00:07:50.648 "max_namespaces": 32, 00:07:50.648 "min_cntlid": 
1, 00:07:50.648 "model_number": "SPDK bdev Controller", 00:07:50.648 "namespaces": [ 00:07:50.648 { 00:07:50.648 "bdev_name": "Null2", 00:07:50.648 "name": "Null2", 00:07:50.648 "nguid": "69E34D5094DF444D91161C875118DA3C", 00:07:50.648 "nsid": 1, 00:07:50.648 "uuid": "69e34d50-94df-444d-9116-1c875118da3c" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:50.648 "serial_number": "SPDK00000000000002", 00:07:50.648 "subtype": "NVMe" 00:07:50.648 }, 00:07:50.648 { 00:07:50.648 "allow_any_host": true, 00:07:50.648 "hosts": [], 00:07:50.648 "listen_addresses": [ 00:07:50.648 { 00:07:50.648 "adrfam": "IPv4", 00:07:50.648 "traddr": "10.0.0.2", 00:07:50.648 "transport": "TCP", 00:07:50.648 "trsvcid": "4420", 00:07:50.648 "trtype": "TCP" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "max_cntlid": 65519, 00:07:50.648 "max_namespaces": 32, 00:07:50.648 "min_cntlid": 1, 00:07:50.648 "model_number": "SPDK bdev Controller", 00:07:50.648 "namespaces": [ 00:07:50.648 { 00:07:50.648 "bdev_name": "Null3", 00:07:50.648 "name": "Null3", 00:07:50.648 "nguid": "45CC23B3B9AF413DADE75BA3ED9C1379", 00:07:50.648 "nsid": 1, 00:07:50.648 "uuid": "45cc23b3-b9af-413d-ade7-5ba3ed9c1379" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:50.648 "serial_number": "SPDK00000000000003", 00:07:50.648 "subtype": "NVMe" 00:07:50.648 }, 00:07:50.648 { 00:07:50.648 "allow_any_host": true, 00:07:50.648 "hosts": [], 00:07:50.648 "listen_addresses": [ 00:07:50.648 { 00:07:50.648 "adrfam": "IPv4", 00:07:50.648 "traddr": "10.0.0.2", 00:07:50.648 "transport": "TCP", 00:07:50.648 "trsvcid": "4420", 00:07:50.648 "trtype": "TCP" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "max_cntlid": 65519, 00:07:50.648 "max_namespaces": 32, 00:07:50.648 "min_cntlid": 1, 00:07:50.648 "model_number": "SPDK bdev Controller", 00:07:50.648 "namespaces": [ 00:07:50.648 { 00:07:50.648 "bdev_name": "Null4", 00:07:50.648 "name": "Null4", 00:07:50.648 "nguid": "C60C85A5E99C4CC3BE4C6F800275651C", 00:07:50.648 "nsid": 1, 00:07:50.648 "uuid": "c60c85a5-e99c-4cc3-be4c-6f800275651c" 00:07:50.648 } 00:07:50.648 ], 00:07:50.648 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:50.648 "serial_number": "SPDK00000000000004", 00:07:50.648 "subtype": "NVMe" 00:07:50.648 } 00:07:50.648 ] 00:07:50.648 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.648 02:24:31 -- target/discovery.sh@42 -- # seq 1 4 00:07:50.649 02:24:31 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:50.649 02:24:31 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:50.649 02:24:31 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 
-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:50.649 02:24:31 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:50.649 02:24:31 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:50.649 02:24:31 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:50.649 02:24:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.649 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:50.649 02:24:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.649 02:24:31 -- target/discovery.sh@49 -- # check_bdevs= 00:07:50.649 02:24:31 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:50.649 02:24:31 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:50.649 02:24:31 -- target/discovery.sh@57 -- # nvmftestfini 00:07:50.649 02:24:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:50.649 02:24:31 -- nvmf/common.sh@116 -- # sync 00:07:50.649 02:24:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:50.649 02:24:31 -- nvmf/common.sh@119 -- # set +e 00:07:50.649 02:24:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:50.649 02:24:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:50.649 rmmod nvme_tcp 00:07:50.910 rmmod nvme_fabrics 00:07:50.910 rmmod nvme_keyring 00:07:50.910 02:24:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:50.910 02:24:31 -- nvmf/common.sh@123 -- # set -e 00:07:50.910 02:24:31 -- nvmf/common.sh@124 -- # return 0 00:07:50.910 02:24:31 -- nvmf/common.sh@477 -- # '[' -n 61495 ']' 00:07:50.910 02:24:31 -- nvmf/common.sh@478 -- # killprocess 61495 00:07:50.910 02:24:31 -- common/autotest_common.sh@936 -- # '[' -z 61495 ']' 00:07:50.910 02:24:31 -- 
common/autotest_common.sh@940 -- # kill -0 61495 00:07:50.910 02:24:31 -- common/autotest_common.sh@941 -- # uname 00:07:50.910 02:24:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.910 02:24:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61495 00:07:50.910 02:24:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.910 killing process with pid 61495 00:07:50.910 02:24:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.910 02:24:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61495' 00:07:50.910 02:24:31 -- common/autotest_common.sh@955 -- # kill 61495 00:07:50.910 [2024-11-21 02:24:31.377285] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:50.910 02:24:31 -- common/autotest_common.sh@960 -- # wait 61495 00:07:51.172 02:24:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:51.172 02:24:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:51.172 02:24:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:51.172 02:24:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.172 02:24:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:51.172 02:24:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.172 02:24:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.172 02:24:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.172 02:24:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:51.172 00:07:51.172 real 0m2.680s 00:07:51.172 user 0m7.136s 00:07:51.172 sys 0m0.695s 00:07:51.172 02:24:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.172 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:51.172 ************************************ 00:07:51.172 END TEST nvmf_discovery 00:07:51.172 ************************************ 00:07:51.172 02:24:31 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:51.172 02:24:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.172 02:24:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.172 02:24:31 -- common/autotest_common.sh@10 -- # set +x 00:07:51.172 ************************************ 00:07:51.172 START TEST nvmf_referrals 00:07:51.172 ************************************ 00:07:51.172 02:24:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:51.433 * Looking for test storage... 
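Bridging note before the referrals trace gets going: the nvmf_discovery run that just finished checked the same target two ways. The kernel initiator's discovery log at 10.0.0.2:4420 reported six records (the current discovery subsystem, the four null-backed subsystems cnode1 through cnode4, and the port-4430 referral), and nvmf_get_subsystems over RPC returned the matching JSON dumped above. A hedged one-liner in the same spirit, assuming rpc_cmd resolves to scripts/rpc.py as in autotest_common.sh and using the nqn fields visible in that JSON:

    # list the NQNs the target currently exposes; the run above showed the
    # discovery subsystem plus nqn.2016-06.io.spdk:cnode1 .. cnode4
    scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'

The referrals test that starts here reuses the same veth topology, but drives a discovery listener on port 8009 and the nvmf_discovery_add_referral / nvmf_discovery_remove_referral RPCs instead.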
00:07:51.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.433 02:24:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.433 02:24:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.433 02:24:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.433 02:24:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.433 02:24:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.433 02:24:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.433 02:24:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.433 02:24:31 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.433 02:24:31 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.433 02:24:31 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.433 02:24:31 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.433 02:24:31 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.433 02:24:31 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.433 02:24:31 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.433 02:24:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.433 02:24:31 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.433 02:24:31 -- scripts/common.sh@344 -- # : 1 00:07:51.433 02:24:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.433 02:24:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.433 02:24:31 -- scripts/common.sh@364 -- # decimal 1 00:07:51.433 02:24:31 -- scripts/common.sh@352 -- # local d=1 00:07:51.433 02:24:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.433 02:24:31 -- scripts/common.sh@354 -- # echo 1 00:07:51.433 02:24:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.433 02:24:31 -- scripts/common.sh@365 -- # decimal 2 00:07:51.433 02:24:31 -- scripts/common.sh@352 -- # local d=2 00:07:51.433 02:24:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.433 02:24:31 -- scripts/common.sh@354 -- # echo 2 00:07:51.433 02:24:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.433 02:24:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.433 02:24:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.433 02:24:31 -- scripts/common.sh@367 -- # return 0 00:07:51.433 02:24:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.433 02:24:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.433 --rc genhtml_branch_coverage=1 00:07:51.433 --rc genhtml_function_coverage=1 00:07:51.433 --rc genhtml_legend=1 00:07:51.433 --rc geninfo_all_blocks=1 00:07:51.433 --rc geninfo_unexecuted_blocks=1 00:07:51.433 00:07:51.433 ' 00:07:51.433 02:24:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.433 --rc genhtml_branch_coverage=1 00:07:51.433 --rc genhtml_function_coverage=1 00:07:51.433 --rc genhtml_legend=1 00:07:51.433 --rc geninfo_all_blocks=1 00:07:51.433 --rc geninfo_unexecuted_blocks=1 00:07:51.433 00:07:51.433 ' 00:07:51.433 02:24:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.433 --rc genhtml_branch_coverage=1 00:07:51.433 --rc genhtml_function_coverage=1 00:07:51.433 --rc genhtml_legend=1 00:07:51.433 --rc geninfo_all_blocks=1 00:07:51.433 --rc geninfo_unexecuted_blocks=1 00:07:51.433 00:07:51.433 ' 00:07:51.433 
02:24:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.433 --rc genhtml_branch_coverage=1 00:07:51.433 --rc genhtml_function_coverage=1 00:07:51.433 --rc genhtml_legend=1 00:07:51.433 --rc geninfo_all_blocks=1 00:07:51.433 --rc geninfo_unexecuted_blocks=1 00:07:51.433 00:07:51.433 ' 00:07:51.433 02:24:31 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.433 02:24:31 -- nvmf/common.sh@7 -- # uname -s 00:07:51.433 02:24:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.433 02:24:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.433 02:24:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.433 02:24:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.433 02:24:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.433 02:24:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.433 02:24:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.433 02:24:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.433 02:24:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.433 02:24:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.433 02:24:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:51.433 02:24:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:51.433 02:24:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.433 02:24:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.433 02:24:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.433 02:24:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.433 02:24:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.433 02:24:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.433 02:24:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.433 02:24:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.433 02:24:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.433 02:24:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.433 02:24:31 -- paths/export.sh@5 -- # export PATH 00:07:51.433 02:24:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.434 02:24:31 -- nvmf/common.sh@46 -- # : 0 00:07:51.434 02:24:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:51.434 02:24:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:51.434 02:24:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:51.434 02:24:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.434 02:24:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.434 02:24:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:51.434 02:24:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:51.434 02:24:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:51.434 02:24:31 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:51.434 02:24:31 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:51.434 02:24:31 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:51.434 02:24:31 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:51.434 02:24:31 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:51.434 02:24:31 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:51.434 02:24:31 -- target/referrals.sh@37 -- # nvmftestinit 00:07:51.434 02:24:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:51.434 02:24:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.434 02:24:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:51.434 02:24:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:51.434 02:24:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:51.434 02:24:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.434 02:24:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.434 02:24:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.434 02:24:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:51.434 02:24:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:51.434 02:24:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:51.434 02:24:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:51.434 02:24:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:51.434 02:24:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:51.434 02:24:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.434 02:24:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:07:51.434 02:24:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:51.434 02:24:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:51.434 02:24:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.434 02:24:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.434 02:24:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.434 02:24:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.434 02:24:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.434 02:24:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.434 02:24:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:51.434 02:24:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.434 02:24:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:51.434 02:24:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:51.434 Cannot find device "nvmf_tgt_br" 00:07:51.434 02:24:31 -- nvmf/common.sh@154 -- # true 00:07:51.434 02:24:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.434 Cannot find device "nvmf_tgt_br2" 00:07:51.434 02:24:32 -- nvmf/common.sh@155 -- # true 00:07:51.434 02:24:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:51.434 02:24:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:51.434 Cannot find device "nvmf_tgt_br" 00:07:51.434 02:24:32 -- nvmf/common.sh@157 -- # true 00:07:51.434 02:24:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:51.434 Cannot find device "nvmf_tgt_br2" 00:07:51.434 02:24:32 -- nvmf/common.sh@158 -- # true 00:07:51.434 02:24:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:51.434 02:24:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:51.694 02:24:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.694 02:24:32 -- nvmf/common.sh@161 -- # true 00:07:51.694 02:24:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.694 02:24:32 -- nvmf/common.sh@162 -- # true 00:07:51.694 02:24:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:51.694 02:24:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:51.694 02:24:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:51.694 02:24:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:51.694 02:24:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:51.694 02:24:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:51.694 02:24:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:51.694 02:24:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:51.694 02:24:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:51.694 02:24:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:51.694 02:24:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:51.694 02:24:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
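The "Cannot find device" and "Cannot open network namespace" messages traced just above are expected: before rebuilding the topology, common.sh tears down whatever a previous run may have left behind and tolerates each failure, which is why every failing ip command is immediately followed by a traced "true". A minimal sketch of that best-effort cleanup idiom (the real script expresses it per command rather than as one block):

    # best-effort teardown: ignore errors when the leftovers do not exist yet
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true

Once the stale devices are gone, the namespace, veth pairs, bridge and addresses are recreated exactly as in the first run.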
00:07:51.694 02:24:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:51.694 02:24:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:51.694 02:24:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:51.694 02:24:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:51.694 02:24:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:51.694 02:24:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:51.694 02:24:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:51.694 02:24:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:51.694 02:24:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:51.694 02:24:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:51.694 02:24:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:51.694 02:24:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:51.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:51.694 00:07:51.694 --- 10.0.0.2 ping statistics --- 00:07:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.694 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:51.694 02:24:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:51.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:51.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:07:51.694 00:07:51.694 --- 10.0.0.3 ping statistics --- 00:07:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.694 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:51.694 02:24:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:51.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:51.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:07:51.694 00:07:51.694 --- 10.0.0.1 ping statistics --- 00:07:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.694 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:07:51.694 02:24:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.695 02:24:32 -- nvmf/common.sh@421 -- # return 0 00:07:51.695 02:24:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:51.695 02:24:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.695 02:24:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:51.695 02:24:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:51.695 02:24:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.695 02:24:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:51.695 02:24:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:51.695 02:24:32 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:51.695 02:24:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:51.695 02:24:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.695 02:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.695 02:24:32 -- nvmf/common.sh@469 -- # nvmfpid=61732 00:07:51.695 02:24:32 -- nvmf/common.sh@470 -- # waitforlisten 61732 00:07:51.695 02:24:32 -- common/autotest_common.sh@829 -- # '[' -z 61732 ']' 00:07:51.695 02:24:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.695 02:24:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.695 02:24:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.953 02:24:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.953 02:24:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.953 02:24:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.953 [2024-11-21 02:24:32.401245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:51.953 [2024-11-21 02:24:32.401357] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.953 [2024-11-21 02:24:32.544437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.211 [2024-11-21 02:24:32.666347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:52.211 [2024-11-21 02:24:32.666539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.211 [2024-11-21 02:24:32.666556] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.211 [2024-11-21 02:24:32.666568] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
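nvmfappstart, traced above, launches the target inside the freshly built namespace and then blocks in waitforlisten until the application answers on its JSON-RPC socket. A condensed, hedged sketch of that pattern (the launch command, socket path and 100-retry budget are taken from the trace; the polling loop and the rpc_get_methods probe are assumptions about waitforlisten's internals):

    # start the NVMe-oF target inside the test namespace, flags as in the trace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the app is ready (max_retries=100 in the trace)
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Only once this returns does the test start issuing nvmf_create_transport and the other RPCs seen below.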
00:07:52.211 [2024-11-21 02:24:32.666773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.211 [2024-11-21 02:24:32.667304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.211 [2024-11-21 02:24:32.668006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.211 [2024-11-21 02:24:32.668061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.778 02:24:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.778 02:24:33 -- common/autotest_common.sh@862 -- # return 0 00:07:52.778 02:24:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:52.778 02:24:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.778 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:52.778 02:24:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.778 02:24:33 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.778 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.778 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:52.778 [2024-11-21 02:24:33.403880] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.036 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.036 02:24:33 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:53.036 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.036 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.036 [2024-11-21 02:24:33.435357] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:53.036 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.036 02:24:33 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:53.036 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.036 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.036 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.036 02:24:33 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:53.036 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.036 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.036 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.036 02:24:33 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:53.036 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.036 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.036 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.036 02:24:33 -- target/referrals.sh@48 -- # jq length 00:07:53.036 02:24:33 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.036 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.036 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.036 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.036 02:24:33 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:53.036 02:24:33 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:53.036 02:24:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.036 02:24:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.036 02:24:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
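At this point the referrals test has registered three referrals (127.0.0.2, 127.0.0.3 and 127.0.0.4, all on port 4430) and confirmed via nvmf_discovery_get_referrals that the count is 3. The get_referral_ips helper whose trace begins just above (and continues below with the sort) checks the same set two ways; a condensed sketch of both checks, reusing the jq filters and discover flags visible in the surrounding trace (rpc_cmd is assumed to resolve to scripts/rpc.py):

    # RPC view of the referrals
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # initiator view: discover against the 8009 discovery listener and keep only
    # the records that are not the current discovery subsystem
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # both are expected to print 127.0.0.2 127.0.0.3 127.0.0.4 at this stage

Later in the trace the referrals are removed, the same two views are shown to be empty, and referrals are then re-added with explicit subsystem NQNs before the comparison is repeated.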
00:07:53.036 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.036 02:24:33 -- target/referrals.sh@21 -- # sort 00:07:53.036 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.036 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.036 02:24:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:53.036 02:24:33 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:53.036 02:24:33 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:53.036 02:24:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.036 02:24:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.036 02:24:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.036 02:24:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.036 02:24:33 -- target/referrals.sh@26 -- # sort 00:07:53.295 02:24:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:53.295 02:24:33 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:53.295 02:24:33 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:53.295 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.295 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.295 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.295 02:24:33 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:53.295 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.295 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.295 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.295 02:24:33 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:53.295 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.295 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.295 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.295 02:24:33 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.295 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.295 02:24:33 -- target/referrals.sh@56 -- # jq length 00:07:53.295 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.295 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.295 02:24:33 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:53.295 02:24:33 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:53.295 02:24:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.295 02:24:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.295 02:24:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.295 02:24:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.295 02:24:33 -- target/referrals.sh@26 -- # sort 00:07:53.554 02:24:33 -- target/referrals.sh@26 -- # echo 00:07:53.554 02:24:33 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:53.554 02:24:33 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:53.554 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.554 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.554 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.554 02:24:33 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:53.554 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.554 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.554 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.554 02:24:33 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:53.554 02:24:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.554 02:24:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.554 02:24:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.554 02:24:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.554 02:24:33 -- common/autotest_common.sh@10 -- # set +x 00:07:53.554 02:24:33 -- target/referrals.sh@21 -- # sort 00:07:53.554 02:24:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.554 02:24:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:53.554 02:24:34 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.554 02:24:34 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:53.554 02:24:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.554 02:24:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.554 02:24:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.554 02:24:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.554 02:24:34 -- target/referrals.sh@26 -- # sort 00:07:53.554 02:24:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:53.554 02:24:34 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.554 02:24:34 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:53.554 02:24:34 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:53.554 02:24:34 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:53.554 02:24:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.554 02:24:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:53.813 02:24:34 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:53.813 02:24:34 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:53.813 02:24:34 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:53.814 02:24:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:53.814 02:24:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
--hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.814 02:24:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:53.814 02:24:34 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:53.814 02:24:34 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:53.814 02:24:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.814 02:24:34 -- common/autotest_common.sh@10 -- # set +x 00:07:53.814 02:24:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.814 02:24:34 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:53.814 02:24:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.814 02:24:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.814 02:24:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.814 02:24:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.814 02:24:34 -- target/referrals.sh@21 -- # sort 00:07:53.814 02:24:34 -- common/autotest_common.sh@10 -- # set +x 00:07:53.814 02:24:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.814 02:24:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:53.814 02:24:34 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:53.814 02:24:34 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:53.814 02:24:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.814 02:24:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.814 02:24:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.814 02:24:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.814 02:24:34 -- target/referrals.sh@26 -- # sort 00:07:54.072 02:24:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:54.072 02:24:34 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:54.072 02:24:34 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:54.072 02:24:34 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:54.072 02:24:34 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:54.072 02:24:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.072 02:24:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:54.072 02:24:34 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:54.072 02:24:34 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:54.072 02:24:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:54.072 02:24:34 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:54.072 02:24:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:54.072 02:24:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 
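The -n argument passed to nvmf_discovery_add_referral above controls how the referral surfaces in the discovery log page: pointing it at the discovery service produces a record with subtype "discovery subsystem referral" and the well-known NQN nqn.2014-08.org.nvmexpress.discovery, while pointing it at a subsystem NQN produces an "nvme subsystem" record carrying that NQN (here nqn.2016-06.io.spdk:cnode1). A quick way to eyeball both flavours by hand (the jq expression here is illustrative, not taken from the test):

    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq '.records[] | {subtype, subnqn, traddr}'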
00:07:54.331 02:24:34 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:54.331 02:24:34 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:54.331 02:24:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 02:24:34 -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 02:24:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 02:24:34 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.331 02:24:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 02:24:34 -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 02:24:34 -- target/referrals.sh@82 -- # jq length 00:07:54.331 02:24:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 02:24:34 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:54.331 02:24:34 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:54.331 02:24:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.331 02:24:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.331 02:24:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.331 02:24:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.331 02:24:34 -- target/referrals.sh@26 -- # sort 00:07:54.589 02:24:35 -- target/referrals.sh@26 -- # echo 00:07:54.589 02:24:35 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:54.589 02:24:35 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:54.589 02:24:35 -- target/referrals.sh@86 -- # nvmftestfini 00:07:54.589 02:24:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:54.589 02:24:35 -- nvmf/common.sh@116 -- # sync 00:07:54.589 02:24:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:54.589 02:24:35 -- nvmf/common.sh@119 -- # set +e 00:07:54.589 02:24:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:54.589 02:24:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:54.589 rmmod nvme_tcp 00:07:54.589 rmmod nvme_fabrics 00:07:54.589 rmmod nvme_keyring 00:07:54.589 02:24:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:54.589 02:24:35 -- nvmf/common.sh@123 -- # set -e 00:07:54.589 02:24:35 -- nvmf/common.sh@124 -- # return 0 00:07:54.589 02:24:35 -- nvmf/common.sh@477 -- # '[' -n 61732 ']' 00:07:54.589 02:24:35 -- nvmf/common.sh@478 -- # killprocess 61732 00:07:54.589 02:24:35 -- common/autotest_common.sh@936 -- # '[' -z 61732 ']' 00:07:54.589 02:24:35 -- common/autotest_common.sh@940 -- # kill -0 61732 00:07:54.589 02:24:35 -- common/autotest_common.sh@941 -- # uname 00:07:54.589 02:24:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:54.589 02:24:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61732 00:07:54.589 02:24:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:54.589 02:24:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:54.589 killing process with pid 61732 00:07:54.589 02:24:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61732' 00:07:54.589 02:24:35 -- common/autotest_common.sh@955 -- # kill 61732 00:07:54.589 02:24:35 -- common/autotest_common.sh@960 -- # wait 61732 00:07:54.847 02:24:35 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:54.847 02:24:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:54.847 02:24:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:54.847 02:24:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.847 02:24:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:54.847 02:24:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.847 02:24:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.847 02:24:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.847 02:24:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:54.847 00:07:54.847 real 0m3.699s 00:07:54.847 user 0m11.962s 00:07:54.847 sys 0m0.957s 00:07:54.847 02:24:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.847 ************************************ 00:07:54.847 END TEST nvmf_referrals 00:07:54.847 ************************************ 00:07:54.847 02:24:35 -- common/autotest_common.sh@10 -- # set +x 00:07:55.107 02:24:35 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:55.107 02:24:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:55.107 02:24:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.107 02:24:35 -- common/autotest_common.sh@10 -- # set +x 00:07:55.107 ************************************ 00:07:55.107 START TEST nvmf_connect_disconnect 00:07:55.107 ************************************ 00:07:55.107 02:24:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:55.107 * Looking for test storage... 00:07:55.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:55.107 02:24:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:55.107 02:24:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:55.107 02:24:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:55.107 02:24:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:55.107 02:24:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:55.107 02:24:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:55.107 02:24:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:55.107 02:24:35 -- scripts/common.sh@335 -- # IFS=.-: 00:07:55.107 02:24:35 -- scripts/common.sh@335 -- # read -ra ver1 00:07:55.107 02:24:35 -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.107 02:24:35 -- scripts/common.sh@336 -- # read -ra ver2 00:07:55.107 02:24:35 -- scripts/common.sh@337 -- # local 'op=<' 00:07:55.107 02:24:35 -- scripts/common.sh@339 -- # ver1_l=2 00:07:55.107 02:24:35 -- scripts/common.sh@340 -- # ver2_l=1 00:07:55.107 02:24:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:55.107 02:24:35 -- scripts/common.sh@343 -- # case "$op" in 00:07:55.107 02:24:35 -- scripts/common.sh@344 -- # : 1 00:07:55.107 02:24:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:55.107 02:24:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.107 02:24:35 -- scripts/common.sh@364 -- # decimal 1 00:07:55.107 02:24:35 -- scripts/common.sh@352 -- # local d=1 00:07:55.107 02:24:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.107 02:24:35 -- scripts/common.sh@354 -- # echo 1 00:07:55.107 02:24:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:55.107 02:24:35 -- scripts/common.sh@365 -- # decimal 2 00:07:55.107 02:24:35 -- scripts/common.sh@352 -- # local d=2 00:07:55.107 02:24:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.107 02:24:35 -- scripts/common.sh@354 -- # echo 2 00:07:55.107 02:24:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:55.107 02:24:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:55.107 02:24:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:55.107 02:24:35 -- scripts/common.sh@367 -- # return 0 00:07:55.107 02:24:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.107 02:24:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:55.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.107 --rc genhtml_branch_coverage=1 00:07:55.107 --rc genhtml_function_coverage=1 00:07:55.107 --rc genhtml_legend=1 00:07:55.107 --rc geninfo_all_blocks=1 00:07:55.107 --rc geninfo_unexecuted_blocks=1 00:07:55.107 00:07:55.107 ' 00:07:55.107 02:24:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:55.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.107 --rc genhtml_branch_coverage=1 00:07:55.107 --rc genhtml_function_coverage=1 00:07:55.107 --rc genhtml_legend=1 00:07:55.107 --rc geninfo_all_blocks=1 00:07:55.107 --rc geninfo_unexecuted_blocks=1 00:07:55.107 00:07:55.107 ' 00:07:55.107 02:24:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:55.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.107 --rc genhtml_branch_coverage=1 00:07:55.107 --rc genhtml_function_coverage=1 00:07:55.107 --rc genhtml_legend=1 00:07:55.107 --rc geninfo_all_blocks=1 00:07:55.107 --rc geninfo_unexecuted_blocks=1 00:07:55.107 00:07:55.107 ' 00:07:55.107 02:24:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:55.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.107 --rc genhtml_branch_coverage=1 00:07:55.107 --rc genhtml_function_coverage=1 00:07:55.107 --rc genhtml_legend=1 00:07:55.107 --rc geninfo_all_blocks=1 00:07:55.107 --rc geninfo_unexecuted_blocks=1 00:07:55.107 00:07:55.107 ' 00:07:55.107 02:24:35 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:55.107 02:24:35 -- nvmf/common.sh@7 -- # uname -s 00:07:55.107 02:24:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.107 02:24:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.107 02:24:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.107 02:24:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.107 02:24:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.107 02:24:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.107 02:24:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.107 02:24:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.107 02:24:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.107 02:24:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.107 02:24:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
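The host NQN generated here, and the host ID derived from it on the next line, are what every nvme discover and nvme connect in this run passes as --hostnqn/--hostid to identify the initiator. A sketch of that derivation (the exact parameter expansion used by nvmf/common.sh may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # the bare UUID part, handed to --hostid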
00:07:55.107 02:24:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:07:55.107 02:24:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.107 02:24:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.107 02:24:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:55.107 02:24:35 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.107 02:24:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.107 02:24:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.107 02:24:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.107 02:24:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.107 02:24:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.107 02:24:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.107 02:24:35 -- paths/export.sh@5 -- # export PATH 00:07:55.108 02:24:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.108 02:24:35 -- nvmf/common.sh@46 -- # : 0 00:07:55.108 02:24:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:55.108 02:24:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:55.108 02:24:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:55.108 02:24:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.108 02:24:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.108 02:24:35 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:55.108 02:24:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:55.108 02:24:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:55.108 02:24:35 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:55.108 02:24:35 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:55.108 02:24:35 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:55.108 02:24:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:55.108 02:24:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.108 02:24:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:55.108 02:24:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:55.108 02:24:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:55.108 02:24:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.108 02:24:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.108 02:24:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.108 02:24:35 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:55.108 02:24:35 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:55.108 02:24:35 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:55.108 02:24:35 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:55.108 02:24:35 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:55.108 02:24:35 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:55.108 02:24:35 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.108 02:24:35 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.108 02:24:35 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:55.108 02:24:35 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:55.108 02:24:35 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:55.108 02:24:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:55.108 02:24:35 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:55.108 02:24:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.108 02:24:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:55.108 02:24:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:55.108 02:24:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:55.108 02:24:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:55.108 02:24:35 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:55.108 02:24:35 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:55.108 Cannot find device "nvmf_tgt_br" 00:07:55.108 02:24:35 -- nvmf/common.sh@154 -- # true 00:07:55.108 02:24:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:55.367 Cannot find device "nvmf_tgt_br2" 00:07:55.367 02:24:35 -- nvmf/common.sh@155 -- # true 00:07:55.367 02:24:35 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:55.367 02:24:35 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:55.367 Cannot find device "nvmf_tgt_br" 00:07:55.367 02:24:35 -- nvmf/common.sh@157 -- # true 00:07:55.367 02:24:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:55.367 Cannot find device "nvmf_tgt_br2" 00:07:55.367 02:24:35 -- nvmf/common.sh@158 -- # true 00:07:55.367 02:24:35 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:55.367 02:24:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:55.367 02:24:35 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:07:55.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.367 02:24:35 -- nvmf/common.sh@161 -- # true 00:07:55.367 02:24:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:55.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:55.367 02:24:35 -- nvmf/common.sh@162 -- # true 00:07:55.367 02:24:35 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:55.367 02:24:35 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:55.367 02:24:35 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:55.367 02:24:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:55.367 02:24:35 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:55.367 02:24:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:55.367 02:24:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:55.367 02:24:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:55.367 02:24:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:55.367 02:24:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:55.367 02:24:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:55.367 02:24:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:55.367 02:24:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:55.367 02:24:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:55.367 02:24:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:55.367 02:24:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:55.367 02:24:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:55.367 02:24:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:55.367 02:24:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:55.367 02:24:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:55.367 02:24:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:55.625 02:24:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:55.625 02:24:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:55.626 02:24:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:55.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:07:55.626 00:07:55.626 --- 10.0.0.2 ping statistics --- 00:07:55.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.626 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:55.626 02:24:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:55.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:55.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:07:55.626 00:07:55.626 --- 10.0.0.3 ping statistics --- 00:07:55.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.626 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:07:55.626 02:24:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:55.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:55.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:55.626 00:07:55.626 --- 10.0.0.1 ping statistics --- 00:07:55.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.626 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:55.626 02:24:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.626 02:24:36 -- nvmf/common.sh@421 -- # return 0 00:07:55.626 02:24:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:55.626 02:24:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.626 02:24:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:55.626 02:24:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:55.626 02:24:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.626 02:24:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:55.626 02:24:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:55.626 02:24:36 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:55.626 02:24:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:55.626 02:24:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:55.626 02:24:36 -- common/autotest_common.sh@10 -- # set +x 00:07:55.626 02:24:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:55.626 02:24:36 -- nvmf/common.sh@469 -- # nvmfpid=62048 00:07:55.626 02:24:36 -- nvmf/common.sh@470 -- # waitforlisten 62048 00:07:55.626 02:24:36 -- common/autotest_common.sh@829 -- # '[' -z 62048 ']' 00:07:55.626 02:24:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.626 02:24:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.626 02:24:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.626 02:24:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.626 02:24:36 -- common/autotest_common.sh@10 -- # set +x 00:07:55.626 [2024-11-21 02:24:36.124653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.626 [2024-11-21 02:24:36.124782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.626 [2024-11-21 02:24:36.265185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.884 [2024-11-21 02:24:36.372267] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.884 [2024-11-21 02:24:36.372454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.884 [2024-11-21 02:24:36.372471] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.884 [2024-11-21 02:24:36.372482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
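The block of ip and iptables commands above is nvmf_veth_init rebuilding the virtual test network: one initiator-side interface left in the root namespace and two target-side interfaces moved into nvmf_tgt_ns_spdk, with all of their veth peers enslaved to a single bridge. Condensed to its essentials (commands as they appear in the trace; the intermediate 'link set ... up' steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # 10.0.0.1, root netns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # 10.0.0.2, target netns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # 10.0.0.3, target netns
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2      # the pings above confirm the paths in both directions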
00:07:55.884 [2024-11-21 02:24:36.372663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.884 [2024-11-21 02:24:36.372825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.884 [2024-11-21 02:24:36.373640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.884 [2024-11-21 02:24:36.373661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.819 02:24:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.819 02:24:37 -- common/autotest_common.sh@862 -- # return 0 00:07:56.819 02:24:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:56.819 02:24:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.819 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:56.819 02:24:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:56.819 02:24:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.819 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:56.819 [2024-11-21 02:24:37.206971] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.819 02:24:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:56.819 02:24:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.819 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:56.819 02:24:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:56.819 02:24:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.819 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:56.819 02:24:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:56.819 02:24:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.819 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:56.819 02:24:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.819 02:24:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.819 02:24:37 -- common/autotest_common.sh@10 -- # set +x 00:07:56.819 [2024-11-21 02:24:37.290968] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.819 02:24:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:56.819 02:24:37 -- target/connect_disconnect.sh@34 -- # set +x 00:07:59.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:08.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.985 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:57.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.976 02:28:22 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
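Each of the hundred 'disconnected 1 controller(s)' lines above is the visible tail of one iteration of the connect/disconnect loop: connect to cnode1 on the data listener at 10.0.0.2:4420, wait for the Malloc0-backed namespace to appear, then disconnect again. A hedged sketch of a single iteration, using the NVME_CONNECT='nvme connect -i 8' prefix and the host identifiers set up earlier (the real script's wait-for-namespace handling is more involved and is not reproduced here):

    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b \
        --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b
    # ... wait until the namespace shows up under /sys/class/nvme ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # prints: NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)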
00:11:41.976 02:28:22 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:41.976 02:28:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:41.976 02:28:22 -- nvmf/common.sh@116 -- # sync 00:11:41.976 02:28:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:41.976 02:28:22 -- nvmf/common.sh@119 -- # set +e 00:11:41.976 02:28:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:41.976 02:28:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:41.976 rmmod nvme_tcp 00:11:41.976 rmmod nvme_fabrics 00:11:41.976 rmmod nvme_keyring 00:11:41.976 02:28:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:41.976 02:28:22 -- nvmf/common.sh@123 -- # set -e 00:11:41.976 02:28:22 -- nvmf/common.sh@124 -- # return 0 00:11:41.976 02:28:22 -- nvmf/common.sh@477 -- # '[' -n 62048 ']' 00:11:41.976 02:28:22 -- nvmf/common.sh@478 -- # killprocess 62048 00:11:41.976 02:28:22 -- common/autotest_common.sh@936 -- # '[' -z 62048 ']' 00:11:41.976 02:28:22 -- common/autotest_common.sh@940 -- # kill -0 62048 00:11:41.976 02:28:22 -- common/autotest_common.sh@941 -- # uname 00:11:41.976 02:28:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.976 02:28:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62048 00:11:41.976 killing process with pid 62048 00:11:41.976 02:28:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:41.976 02:28:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:41.976 02:28:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62048' 00:11:41.976 02:28:22 -- common/autotest_common.sh@955 -- # kill 62048 00:11:41.976 02:28:22 -- common/autotest_common.sh@960 -- # wait 62048 00:11:41.976 02:28:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:41.976 02:28:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:41.976 02:28:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:41.976 02:28:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:41.976 02:28:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:41.976 02:28:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.976 02:28:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.976 02:28:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.234 02:28:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:42.234 ************************************ 00:11:42.234 END TEST nvmf_connect_disconnect 00:11:42.234 ************************************ 00:11:42.234 00:11:42.234 real 3m47.111s 00:11:42.234 user 14m44.065s 00:11:42.234 sys 0m23.181s 00:11:42.234 02:28:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:42.234 02:28:22 -- common/autotest_common.sh@10 -- # set +x 00:11:42.234 02:28:22 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:42.234 02:28:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:42.234 02:28:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.234 02:28:22 -- common/autotest_common.sh@10 -- # set +x 00:11:42.234 ************************************ 00:11:42.234 START TEST nvmf_multitarget 00:11:42.234 ************************************ 00:11:42.234 02:28:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:42.234 * Looking for test storage... 
00:11:42.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.234 02:28:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:42.234 02:28:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:42.234 02:28:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:42.234 02:28:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:42.234 02:28:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:42.234 02:28:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:42.234 02:28:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:42.234 02:28:22 -- scripts/common.sh@335 -- # IFS=.-: 00:11:42.234 02:28:22 -- scripts/common.sh@335 -- # read -ra ver1 00:11:42.234 02:28:22 -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.234 02:28:22 -- scripts/common.sh@336 -- # read -ra ver2 00:11:42.234 02:28:22 -- scripts/common.sh@337 -- # local 'op=<' 00:11:42.234 02:28:22 -- scripts/common.sh@339 -- # ver1_l=2 00:11:42.234 02:28:22 -- scripts/common.sh@340 -- # ver2_l=1 00:11:42.234 02:28:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:42.234 02:28:22 -- scripts/common.sh@343 -- # case "$op" in 00:11:42.234 02:28:22 -- scripts/common.sh@344 -- # : 1 00:11:42.234 02:28:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:42.234 02:28:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:42.234 02:28:22 -- scripts/common.sh@364 -- # decimal 1 00:11:42.234 02:28:22 -- scripts/common.sh@352 -- # local d=1 00:11:42.234 02:28:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.234 02:28:22 -- scripts/common.sh@354 -- # echo 1 00:11:42.234 02:28:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:42.234 02:28:22 -- scripts/common.sh@365 -- # decimal 2 00:11:42.493 02:28:22 -- scripts/common.sh@352 -- # local d=2 00:11:42.493 02:28:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.493 02:28:22 -- scripts/common.sh@354 -- # echo 2 00:11:42.493 02:28:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:42.493 02:28:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:42.493 02:28:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:42.493 02:28:22 -- scripts/common.sh@367 -- # return 0 00:11:42.493 02:28:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.493 02:28:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.493 --rc genhtml_branch_coverage=1 00:11:42.493 --rc genhtml_function_coverage=1 00:11:42.493 --rc genhtml_legend=1 00:11:42.493 --rc geninfo_all_blocks=1 00:11:42.493 --rc geninfo_unexecuted_blocks=1 00:11:42.493 00:11:42.493 ' 00:11:42.493 02:28:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.493 --rc genhtml_branch_coverage=1 00:11:42.493 --rc genhtml_function_coverage=1 00:11:42.494 --rc genhtml_legend=1 00:11:42.494 --rc geninfo_all_blocks=1 00:11:42.494 --rc geninfo_unexecuted_blocks=1 00:11:42.494 00:11:42.494 ' 00:11:42.494 02:28:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:42.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.494 --rc genhtml_branch_coverage=1 00:11:42.494 --rc genhtml_function_coverage=1 00:11:42.494 --rc genhtml_legend=1 00:11:42.494 --rc geninfo_all_blocks=1 00:11:42.494 --rc geninfo_unexecuted_blocks=1 00:11:42.494 00:11:42.494 ' 00:11:42.494 
02:28:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:42.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.494 --rc genhtml_branch_coverage=1 00:11:42.494 --rc genhtml_function_coverage=1 00:11:42.494 --rc genhtml_legend=1 00:11:42.494 --rc geninfo_all_blocks=1 00:11:42.494 --rc geninfo_unexecuted_blocks=1 00:11:42.494 00:11:42.494 ' 00:11:42.494 02:28:22 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.494 02:28:22 -- nvmf/common.sh@7 -- # uname -s 00:11:42.494 02:28:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.494 02:28:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.494 02:28:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.494 02:28:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.494 02:28:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.494 02:28:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.494 02:28:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.494 02:28:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.494 02:28:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.494 02:28:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.494 02:28:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:11:42.494 02:28:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:11:42.494 02:28:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.494 02:28:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.494 02:28:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.494 02:28:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.494 02:28:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.494 02:28:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.494 02:28:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.494 02:28:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.494 02:28:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.494 02:28:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.494 02:28:22 -- paths/export.sh@5 -- # export PATH 00:11:42.494 02:28:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.494 02:28:22 -- nvmf/common.sh@46 -- # : 0 00:11:42.494 02:28:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:42.494 02:28:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:42.494 02:28:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:42.494 02:28:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.494 02:28:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.494 02:28:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:42.494 02:28:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:42.494 02:28:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:42.494 02:28:22 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:42.494 02:28:22 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:42.494 02:28:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:42.494 02:28:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.494 02:28:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:42.494 02:28:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:42.494 02:28:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:42.494 02:28:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.494 02:28:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.494 02:28:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.494 02:28:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:42.494 02:28:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:42.494 02:28:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:42.494 02:28:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:42.494 02:28:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:42.494 02:28:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:42.494 02:28:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.494 02:28:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.494 02:28:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:42.494 02:28:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:42.494 02:28:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.494 02:28:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.494 02:28:22 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.494 02:28:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.494 02:28:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.494 02:28:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.494 02:28:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.494 02:28:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.494 02:28:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:42.494 02:28:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:42.494 Cannot find device "nvmf_tgt_br" 00:11:42.494 02:28:22 -- nvmf/common.sh@154 -- # true 00:11:42.494 02:28:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.494 Cannot find device "nvmf_tgt_br2" 00:11:42.494 02:28:22 -- nvmf/common.sh@155 -- # true 00:11:42.494 02:28:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:42.494 02:28:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:42.494 Cannot find device "nvmf_tgt_br" 00:11:42.494 02:28:22 -- nvmf/common.sh@157 -- # true 00:11:42.494 02:28:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:42.494 Cannot find device "nvmf_tgt_br2" 00:11:42.494 02:28:22 -- nvmf/common.sh@158 -- # true 00:11:42.494 02:28:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:42.494 02:28:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:42.494 02:28:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.494 02:28:23 -- nvmf/common.sh@161 -- # true 00:11:42.494 02:28:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.494 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.494 02:28:23 -- nvmf/common.sh@162 -- # true 00:11:42.494 02:28:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:42.494 02:28:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:42.494 02:28:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:42.494 02:28:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:42.494 02:28:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:42.494 02:28:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:42.494 02:28:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:42.494 02:28:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:42.494 02:28:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:42.494 02:28:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:42.494 02:28:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:42.494 02:28:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:42.494 02:28:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:42.494 02:28:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:42.494 02:28:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:42.753 02:28:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:42.753 02:28:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:42.753 02:28:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:42.753 02:28:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:42.753 02:28:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:42.753 02:28:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:42.753 02:28:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:42.753 02:28:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:42.753 02:28:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:42.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:11:42.753 00:11:42.753 --- 10.0.0.2 ping statistics --- 00:11:42.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.753 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:11:42.753 02:28:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:42.753 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:42.753 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:42.753 00:11:42.753 --- 10.0.0.3 ping statistics --- 00:11:42.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.753 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:42.753 02:28:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:42.753 00:11:42.753 --- 10.0.0.1 ping statistics --- 00:11:42.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.753 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:42.753 02:28:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.753 02:28:23 -- nvmf/common.sh@421 -- # return 0 00:11:42.753 02:28:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:42.753 02:28:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.753 02:28:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:42.753 02:28:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:42.753 02:28:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.753 02:28:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:42.753 02:28:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:42.753 02:28:23 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:42.753 02:28:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:42.753 02:28:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:42.753 02:28:23 -- common/autotest_common.sh@10 -- # set +x 00:11:42.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
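[annotation] The nvmf_veth_init sequence traced above builds an all-virtual TCP test topology: the initiator keeps 10.0.0.1 on nvmf_init_if in the default namespace, the target ends of two veth pairs are moved into the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, the bridge-side peers are enslaved to nvmf_br so all three addresses share one L2 segment, and iptables opens TCP port 4420 toward the initiator interface. The three pings confirm reachability in both directions before the target starts. A condensed sketch of the same setup, with device names, addresses and port taken from the trace (run as root; the teardown of stale devices and error handling are omitted):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1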
00:11:42.753 02:28:23 -- nvmf/common.sh@469 -- # nvmfpid=65843 00:11:42.753 02:28:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.753 02:28:23 -- nvmf/common.sh@470 -- # waitforlisten 65843 00:11:42.753 02:28:23 -- common/autotest_common.sh@829 -- # '[' -z 65843 ']' 00:11:42.753 02:28:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.753 02:28:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.754 02:28:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.754 02:28:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.754 02:28:23 -- common/autotest_common.sh@10 -- # set +x 00:11:42.754 [2024-11-21 02:28:23.297866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:42.754 [2024-11-21 02:28:23.298137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.012 [2024-11-21 02:28:23.431196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.012 [2024-11-21 02:28:23.532182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:43.012 [2024-11-21 02:28:23.532625] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.012 [2024-11-21 02:28:23.532800] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.012 [2024-11-21 02:28:23.533021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
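[annotation] The target itself is then started inside that namespace. The command line assembled by nvmfappstart is visible in the trace: ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, i.e. shared-memory id 0, all tracepoint groups enabled, and a four-core mask that matches the four "Reactor started on core N" notices that follow. A minimal stand-alone sketch of the launch; the busy-wait is a simplified stand-in for the harness's waitforlisten, which is assumed to poll the RPC socket at /var/tmp/spdk.sock:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude stand-in for waitforlisten: wait until the RPC socket appears
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # the startup notices above show how to grab a tracepoint snapshot later:
    #   spdk_trace -s nvmf -i 0    (or copy /dev/shm/nvmf_trace.0 for offline analysis)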
00:11:43.012 [2024-11-21 02:28:23.533353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.012 [2024-11-21 02:28:23.533462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.012 [2024-11-21 02:28:23.533511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.012 [2024-11-21 02:28:23.533506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.947 02:28:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.947 02:28:24 -- common/autotest_common.sh@862 -- # return 0 00:11:43.947 02:28:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:43.947 02:28:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:43.947 02:28:24 -- common/autotest_common.sh@10 -- # set +x 00:11:43.947 02:28:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.947 02:28:24 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:43.947 02:28:24 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:43.947 02:28:24 -- target/multitarget.sh@21 -- # jq length 00:11:43.947 02:28:24 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:43.947 02:28:24 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:44.205 "nvmf_tgt_1" 00:11:44.205 02:28:24 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:44.205 "nvmf_tgt_2" 00:11:44.205 02:28:24 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:44.205 02:28:24 -- target/multitarget.sh@28 -- # jq length 00:11:44.464 02:28:24 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:44.464 02:28:24 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:44.464 true 00:11:44.464 02:28:25 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:44.722 true 00:11:44.722 02:28:25 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:44.722 02:28:25 -- target/multitarget.sh@35 -- # jq length 00:11:44.722 02:28:25 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:44.722 02:28:25 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:44.722 02:28:25 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:44.722 02:28:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:44.722 02:28:25 -- nvmf/common.sh@116 -- # sync 00:11:44.722 02:28:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:44.722 02:28:25 -- nvmf/common.sh@119 -- # set +e 00:11:44.722 02:28:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:44.722 02:28:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:44.722 rmmod nvme_tcp 00:11:44.981 rmmod nvme_fabrics 00:11:44.981 rmmod nvme_keyring 00:11:44.981 02:28:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:44.981 02:28:25 -- nvmf/common.sh@123 -- # set -e 00:11:44.981 02:28:25 -- nvmf/common.sh@124 -- # return 0 00:11:44.981 02:28:25 -- nvmf/common.sh@477 -- # '[' -n 65843 ']' 00:11:44.981 02:28:25 -- nvmf/common.sh@478 -- # killprocess 65843 00:11:44.981 02:28:25 
-- common/autotest_common.sh@936 -- # '[' -z 65843 ']' 00:11:44.981 02:28:25 -- common/autotest_common.sh@940 -- # kill -0 65843 00:11:44.981 02:28:25 -- common/autotest_common.sh@941 -- # uname 00:11:44.981 02:28:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.981 02:28:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65843 00:11:44.981 killing process with pid 65843 00:11:44.981 02:28:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.981 02:28:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.981 02:28:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65843' 00:11:44.981 02:28:25 -- common/autotest_common.sh@955 -- # kill 65843 00:11:44.981 02:28:25 -- common/autotest_common.sh@960 -- # wait 65843 00:11:45.238 02:28:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:45.238 02:28:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:45.238 02:28:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:45.238 02:28:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.238 02:28:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:45.238 02:28:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.238 02:28:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.238 02:28:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.238 02:28:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:45.238 ************************************ 00:11:45.238 END TEST nvmf_multitarget 00:11:45.238 ************************************ 00:11:45.238 00:11:45.238 real 0m3.104s 00:11:45.238 user 0m10.017s 00:11:45.238 sys 0m0.757s 00:11:45.238 02:28:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:45.238 02:28:25 -- common/autotest_common.sh@10 -- # set +x 00:11:45.238 02:28:25 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:45.238 02:28:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:45.238 02:28:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.238 02:28:25 -- common/autotest_common.sh@10 -- # set +x 00:11:45.238 ************************************ 00:11:45.238 START TEST nvmf_rpc 00:11:45.238 ************************************ 00:11:45.238 02:28:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:45.497 * Looking for test storage... 
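[annotation] Stripped of the xtrace noise, the nvmf_multitarget run that just finished reduces to a short RPC conversation over multitarget_rpc.py: count the targets (only the default one exists), create two extra targets (nvmf_tgt_1 and nvmf_tgt_2, each with -s 32 as in the trace), check the count went from 1 to 3, delete both, and check it is back to 1. Reconstructed from the trace above, with the script path as logged:
    rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    "$rpc" nvmf_get_targets | jq length              # 1 -- only the default target
    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
    "$rpc" nvmf_get_targets | jq length              # 3
    "$rpc" nvmf_delete_target -n nvmf_tgt_1
    "$rpc" nvmf_delete_target -n nvmf_tgt_2
    "$rpc" nvmf_get_targets | jq length              # back to 1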
00:11:45.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:45.497 02:28:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:45.497 02:28:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:45.497 02:28:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:45.497 02:28:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:45.497 02:28:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:45.497 02:28:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:45.497 02:28:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:45.497 02:28:25 -- scripts/common.sh@335 -- # IFS=.-: 00:11:45.497 02:28:25 -- scripts/common.sh@335 -- # read -ra ver1 00:11:45.497 02:28:25 -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.497 02:28:25 -- scripts/common.sh@336 -- # read -ra ver2 00:11:45.497 02:28:25 -- scripts/common.sh@337 -- # local 'op=<' 00:11:45.497 02:28:25 -- scripts/common.sh@339 -- # ver1_l=2 00:11:45.497 02:28:25 -- scripts/common.sh@340 -- # ver2_l=1 00:11:45.497 02:28:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:45.497 02:28:25 -- scripts/common.sh@343 -- # case "$op" in 00:11:45.497 02:28:25 -- scripts/common.sh@344 -- # : 1 00:11:45.497 02:28:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:45.497 02:28:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.497 02:28:25 -- scripts/common.sh@364 -- # decimal 1 00:11:45.497 02:28:25 -- scripts/common.sh@352 -- # local d=1 00:11:45.497 02:28:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.497 02:28:25 -- scripts/common.sh@354 -- # echo 1 00:11:45.497 02:28:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:45.497 02:28:25 -- scripts/common.sh@365 -- # decimal 2 00:11:45.497 02:28:25 -- scripts/common.sh@352 -- # local d=2 00:11:45.497 02:28:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.497 02:28:25 -- scripts/common.sh@354 -- # echo 2 00:11:45.497 02:28:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:45.497 02:28:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:45.497 02:28:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:45.497 02:28:26 -- scripts/common.sh@367 -- # return 0 00:11:45.497 02:28:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.497 02:28:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:45.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.497 --rc genhtml_branch_coverage=1 00:11:45.497 --rc genhtml_function_coverage=1 00:11:45.497 --rc genhtml_legend=1 00:11:45.497 --rc geninfo_all_blocks=1 00:11:45.497 --rc geninfo_unexecuted_blocks=1 00:11:45.497 00:11:45.497 ' 00:11:45.497 02:28:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:45.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.497 --rc genhtml_branch_coverage=1 00:11:45.497 --rc genhtml_function_coverage=1 00:11:45.497 --rc genhtml_legend=1 00:11:45.497 --rc geninfo_all_blocks=1 00:11:45.497 --rc geninfo_unexecuted_blocks=1 00:11:45.497 00:11:45.497 ' 00:11:45.497 02:28:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:45.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.497 --rc genhtml_branch_coverage=1 00:11:45.497 --rc genhtml_function_coverage=1 00:11:45.497 --rc genhtml_legend=1 00:11:45.497 --rc geninfo_all_blocks=1 00:11:45.497 --rc geninfo_unexecuted_blocks=1 00:11:45.497 00:11:45.497 ' 00:11:45.497 
02:28:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:45.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.497 --rc genhtml_branch_coverage=1 00:11:45.497 --rc genhtml_function_coverage=1 00:11:45.497 --rc genhtml_legend=1 00:11:45.497 --rc geninfo_all_blocks=1 00:11:45.497 --rc geninfo_unexecuted_blocks=1 00:11:45.497 00:11:45.497 ' 00:11:45.497 02:28:26 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.497 02:28:26 -- nvmf/common.sh@7 -- # uname -s 00:11:45.497 02:28:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.497 02:28:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.497 02:28:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.497 02:28:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.497 02:28:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.497 02:28:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.497 02:28:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.497 02:28:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.497 02:28:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.497 02:28:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.497 02:28:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:11:45.497 02:28:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:11:45.497 02:28:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.497 02:28:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.497 02:28:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:45.497 02:28:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.497 02:28:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.497 02:28:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.497 02:28:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.497 02:28:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.497 02:28:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.497 02:28:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.497 02:28:26 -- paths/export.sh@5 -- # export PATH 00:11:45.497 02:28:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.497 02:28:26 -- nvmf/common.sh@46 -- # : 0 00:11:45.497 02:28:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:45.497 02:28:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:45.497 02:28:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:45.497 02:28:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.497 02:28:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.497 02:28:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:45.497 02:28:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:45.497 02:28:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:45.497 02:28:26 -- target/rpc.sh@11 -- # loops=5 00:11:45.497 02:28:26 -- target/rpc.sh@23 -- # nvmftestinit 00:11:45.497 02:28:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:45.497 02:28:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.497 02:28:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:45.497 02:28:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:45.497 02:28:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:45.497 02:28:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.497 02:28:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.497 02:28:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.497 02:28:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:45.497 02:28:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:45.497 02:28:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:45.497 02:28:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:45.497 02:28:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:45.497 02:28:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:45.497 02:28:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.498 02:28:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.498 02:28:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:45.498 02:28:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:45.498 02:28:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:45.498 02:28:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:45.498 02:28:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:45.498 02:28:26 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.498 02:28:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:45.498 02:28:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:45.498 02:28:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:45.498 02:28:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:45.498 02:28:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:45.498 02:28:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:45.498 Cannot find device "nvmf_tgt_br" 00:11:45.498 02:28:26 -- nvmf/common.sh@154 -- # true 00:11:45.498 02:28:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.498 Cannot find device "nvmf_tgt_br2" 00:11:45.498 02:28:26 -- nvmf/common.sh@155 -- # true 00:11:45.498 02:28:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:45.498 02:28:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:45.498 Cannot find device "nvmf_tgt_br" 00:11:45.498 02:28:26 -- nvmf/common.sh@157 -- # true 00:11:45.498 02:28:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:45.498 Cannot find device "nvmf_tgt_br2" 00:11:45.498 02:28:26 -- nvmf/common.sh@158 -- # true 00:11:45.498 02:28:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:45.756 02:28:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:45.756 02:28:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.756 02:28:26 -- nvmf/common.sh@161 -- # true 00:11:45.756 02:28:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.756 02:28:26 -- nvmf/common.sh@162 -- # true 00:11:45.756 02:28:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.756 02:28:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.756 02:28:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.756 02:28:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.756 02:28:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:45.756 02:28:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.756 02:28:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.756 02:28:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:45.756 02:28:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:45.756 02:28:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:45.756 02:28:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:45.756 02:28:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:45.756 02:28:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:45.756 02:28:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.756 02:28:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:45.756 02:28:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.756 02:28:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:11:45.756 02:28:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:45.756 02:28:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.756 02:28:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.756 02:28:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:45.756 02:28:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:45.756 02:28:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:45.756 02:28:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:45.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:11:45.756 00:11:45.756 --- 10.0.0.2 ping statistics --- 00:11:45.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.756 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:45.756 02:28:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:45.756 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:45.756 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:11:45.756 00:11:45.756 --- 10.0.0.3 ping statistics --- 00:11:45.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.756 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:45.756 02:28:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:46.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:46.015 00:11:46.015 --- 10.0.0.1 ping statistics --- 00:11:46.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.015 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:46.015 02:28:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.015 02:28:26 -- nvmf/common.sh@421 -- # return 0 00:11:46.015 02:28:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:46.015 02:28:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.015 02:28:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:46.015 02:28:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:46.015 02:28:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.015 02:28:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:46.015 02:28:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:46.015 02:28:26 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:46.015 02:28:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:46.015 02:28:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.015 02:28:26 -- common/autotest_common.sh@10 -- # set +x 00:11:46.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.015 02:28:26 -- nvmf/common.sh@469 -- # nvmfpid=66090 00:11:46.015 02:28:26 -- nvmf/common.sh@470 -- # waitforlisten 66090 00:11:46.015 02:28:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.015 02:28:26 -- common/autotest_common.sh@829 -- # '[' -z 66090 ']' 00:11:46.015 02:28:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.015 02:28:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.015 02:28:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
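[annotation] For the nvmf_rpc test the same pattern repeats: nvmf/common.sh composes the target command line as a bash array and nvmfappstart runs it in the namespace, recording the pid (66090 here) for waitforlisten and the later killprocess. A sketch of how the array is built, following the assignments traced above; the initial value of NVMF_APP (the bare nvmf_tgt path) is an assumption, but it matches the final command line in the log, and NO_HUGE is empty in this run:
    NVMF_APP_SHM_ID=0
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)        # assumed starting value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                       # build_nvmf_app_args
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")            # prefix with the netns wrapper
    "${NVMF_APP[@]}" -m 0xF &                                         # nvmfappstart -m 0xF
    nvmfpid=$!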
00:11:46.015 02:28:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.015 02:28:26 -- common/autotest_common.sh@10 -- # set +x 00:11:46.015 [2024-11-21 02:28:26.499539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:46.015 [2024-11-21 02:28:26.499840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.015 [2024-11-21 02:28:26.638792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.272 [2024-11-21 02:28:26.745000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:46.272 [2024-11-21 02:28:26.745476] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.273 [2024-11-21 02:28:26.745537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.273 [2024-11-21 02:28:26.745694] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.273 [2024-11-21 02:28:26.745838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.273 [2024-11-21 02:28:26.745949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.273 [2024-11-21 02:28:26.746627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.273 [2024-11-21 02:28:26.746663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.207 02:28:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.207 02:28:27 -- common/autotest_common.sh@862 -- # return 0 00:11:47.207 02:28:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:47.207 02:28:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.207 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.207 02:28:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.207 02:28:27 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:47.207 02:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.207 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.207 02:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.207 02:28:27 -- target/rpc.sh@26 -- # stats='{ 00:11:47.207 "poll_groups": [ 00:11:47.207 { 00:11:47.207 "admin_qpairs": 0, 00:11:47.207 "completed_nvme_io": 0, 00:11:47.207 "current_admin_qpairs": 0, 00:11:47.207 "current_io_qpairs": 0, 00:11:47.207 "io_qpairs": 0, 00:11:47.207 "name": "nvmf_tgt_poll_group_0", 00:11:47.207 "pending_bdev_io": 0, 00:11:47.207 "transports": [] 00:11:47.207 }, 00:11:47.207 { 00:11:47.207 "admin_qpairs": 0, 00:11:47.207 "completed_nvme_io": 0, 00:11:47.207 "current_admin_qpairs": 0, 00:11:47.207 "current_io_qpairs": 0, 00:11:47.207 "io_qpairs": 0, 00:11:47.207 "name": "nvmf_tgt_poll_group_1", 00:11:47.207 "pending_bdev_io": 0, 00:11:47.207 "transports": [] 00:11:47.207 }, 00:11:47.207 { 00:11:47.207 "admin_qpairs": 0, 00:11:47.207 "completed_nvme_io": 0, 00:11:47.207 "current_admin_qpairs": 0, 00:11:47.207 "current_io_qpairs": 0, 00:11:47.207 "io_qpairs": 0, 00:11:47.207 "name": "nvmf_tgt_poll_group_2", 00:11:47.207 "pending_bdev_io": 0, 00:11:47.207 "transports": [] 00:11:47.207 }, 00:11:47.207 { 00:11:47.207 "admin_qpairs": 0, 00:11:47.207 "completed_nvme_io": 0, 00:11:47.207 "current_admin_qpairs": 0, 
00:11:47.207 "current_io_qpairs": 0, 00:11:47.207 "io_qpairs": 0, 00:11:47.207 "name": "nvmf_tgt_poll_group_3", 00:11:47.207 "pending_bdev_io": 0, 00:11:47.207 "transports": [] 00:11:47.207 } 00:11:47.207 ], 00:11:47.207 "tick_rate": 2200000000 00:11:47.207 }' 00:11:47.207 02:28:27 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:47.207 02:28:27 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:47.207 02:28:27 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:47.207 02:28:27 -- target/rpc.sh@15 -- # wc -l 00:11:47.207 02:28:27 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:47.207 02:28:27 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:47.207 02:28:27 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:47.207 02:28:27 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.207 02:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.207 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.207 [2024-11-21 02:28:27.739566] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.207 02:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.207 02:28:27 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:47.207 02:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.207 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.208 02:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.208 02:28:27 -- target/rpc.sh@33 -- # stats='{ 00:11:47.208 "poll_groups": [ 00:11:47.208 { 00:11:47.208 "admin_qpairs": 0, 00:11:47.208 "completed_nvme_io": 0, 00:11:47.208 "current_admin_qpairs": 0, 00:11:47.208 "current_io_qpairs": 0, 00:11:47.208 "io_qpairs": 0, 00:11:47.208 "name": "nvmf_tgt_poll_group_0", 00:11:47.208 "pending_bdev_io": 0, 00:11:47.208 "transports": [ 00:11:47.208 { 00:11:47.208 "trtype": "TCP" 00:11:47.208 } 00:11:47.208 ] 00:11:47.208 }, 00:11:47.208 { 00:11:47.208 "admin_qpairs": 0, 00:11:47.208 "completed_nvme_io": 0, 00:11:47.208 "current_admin_qpairs": 0, 00:11:47.208 "current_io_qpairs": 0, 00:11:47.208 "io_qpairs": 0, 00:11:47.208 "name": "nvmf_tgt_poll_group_1", 00:11:47.208 "pending_bdev_io": 0, 00:11:47.208 "transports": [ 00:11:47.208 { 00:11:47.208 "trtype": "TCP" 00:11:47.208 } 00:11:47.208 ] 00:11:47.208 }, 00:11:47.208 { 00:11:47.208 "admin_qpairs": 0, 00:11:47.208 "completed_nvme_io": 0, 00:11:47.208 "current_admin_qpairs": 0, 00:11:47.208 "current_io_qpairs": 0, 00:11:47.208 "io_qpairs": 0, 00:11:47.208 "name": "nvmf_tgt_poll_group_2", 00:11:47.208 "pending_bdev_io": 0, 00:11:47.208 "transports": [ 00:11:47.208 { 00:11:47.208 "trtype": "TCP" 00:11:47.208 } 00:11:47.208 ] 00:11:47.208 }, 00:11:47.208 { 00:11:47.208 "admin_qpairs": 0, 00:11:47.208 "completed_nvme_io": 0, 00:11:47.208 "current_admin_qpairs": 0, 00:11:47.208 "current_io_qpairs": 0, 00:11:47.208 "io_qpairs": 0, 00:11:47.208 "name": "nvmf_tgt_poll_group_3", 00:11:47.208 "pending_bdev_io": 0, 00:11:47.208 "transports": [ 00:11:47.208 { 00:11:47.208 "trtype": "TCP" 00:11:47.208 } 00:11:47.208 ] 00:11:47.208 } 00:11:47.208 ], 00:11:47.208 "tick_rate": 2200000000 00:11:47.208 }' 00:11:47.208 02:28:27 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:47.208 02:28:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:47.208 02:28:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:47.208 02:28:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:47.208 02:28:27 -- target/rpc.sh@35 -- # (( 0 == 0 )) 
00:11:47.208 02:28:27 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:47.208 02:28:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:47.208 02:28:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:47.208 02:28:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:47.466 02:28:27 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:47.466 02:28:27 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:47.466 02:28:27 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:47.466 02:28:27 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:47.466 02:28:27 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:47.466 02:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.466 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.466 Malloc1 00:11:47.466 02:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.466 02:28:27 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.466 02:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.466 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.466 02:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.466 02:28:27 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.467 02:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.467 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.467 02:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.467 02:28:27 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:47.467 02:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.467 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.467 02:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.467 02:28:27 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.467 02:28:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.467 02:28:27 -- common/autotest_common.sh@10 -- # set +x 00:11:47.467 [2024-11-21 02:28:27.963047] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.467 02:28:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.467 02:28:27 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b -a 10.0.0.2 -s 4420 00:11:47.467 02:28:27 -- common/autotest_common.sh@650 -- # local es=0 00:11:47.467 02:28:27 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b -a 10.0.0.2 -s 4420 00:11:47.467 02:28:27 -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:47.467 02:28:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.467 02:28:27 -- common/autotest_common.sh@642 -- # type -t nvme 00:11:47.467 02:28:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.467 02:28:27 -- common/autotest_common.sh@644 -- # type -P nvme 00:11:47.467 02:28:27 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:47.467 02:28:27 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:47.467 02:28:27 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:47.467 02:28:27 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b -a 10.0.0.2 -s 4420 00:11:47.467 [2024-11-21 02:28:27.995291] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b' 00:11:47.467 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:47.467 could not add new controller: failed to write to nvme-fabrics device 00:11:47.467 02:28:28 -- common/autotest_common.sh@653 -- # es=1 00:11:47.467 02:28:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:47.467 02:28:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:47.467 02:28:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:47.467 02:28:28 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:11:47.467 02:28:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.467 02:28:28 -- common/autotest_common.sh@10 -- # set +x 00:11:47.467 02:28:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.467 02:28:28 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.725 02:28:28 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.725 02:28:28 -- common/autotest_common.sh@1187 -- # local i=0 00:11:47.725 02:28:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.725 02:28:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:47.725 02:28:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:49.664 02:28:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:49.664 02:28:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:49.664 02:28:30 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.664 02:28:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:49.664 02:28:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.664 02:28:30 -- common/autotest_common.sh@1197 -- # return 0 00:11:49.664 02:28:30 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.923 02:28:30 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.923 02:28:30 -- common/autotest_common.sh@1208 -- # local i=0 00:11:49.923 02:28:30 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:49.924 02:28:30 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.924 02:28:30 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:49.924 02:28:30 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.924 02:28:30 -- common/autotest_common.sh@1220 -- # return 0 00:11:49.924 02:28:30 -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:11:49.924 02:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.924 02:28:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.924 02:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.924 02:28:30 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.924 02:28:30 -- common/autotest_common.sh@650 -- # local es=0 00:11:49.924 02:28:30 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.924 02:28:30 -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:49.924 02:28:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.924 02:28:30 -- common/autotest_common.sh@642 -- # type -t nvme 00:11:49.924 02:28:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.924 02:28:30 -- common/autotest_common.sh@644 -- # type -P nvme 00:11:49.924 02:28:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.924 02:28:30 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:49.924 02:28:30 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:49.924 02:28:30 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.924 [2024-11-21 02:28:30.406235] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b' 00:11:49.924 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:49.924 could not add new controller: failed to write to nvme-fabrics device 00:11:49.924 02:28:30 -- common/autotest_common.sh@653 -- # es=1 00:11:49.924 02:28:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:49.924 02:28:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:49.924 02:28:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:49.924 02:28:30 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:49.924 02:28:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.924 02:28:30 -- common/autotest_common.sh@10 -- # set +x 00:11:49.924 02:28:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.924 02:28:30 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.182 02:28:30 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.182 02:28:30 -- common/autotest_common.sh@1187 -- # local i=0 00:11:50.183 02:28:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.183 02:28:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:50.183 02:28:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:52.086 02:28:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:52.086 
02:28:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:52.086 02:28:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.087 02:28:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:52.087 02:28:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.087 02:28:32 -- common/autotest_common.sh@1197 -- # return 0 00:11:52.087 02:28:32 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.087 02:28:32 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.087 02:28:32 -- common/autotest_common.sh@1208 -- # local i=0 00:11:52.087 02:28:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:52.087 02:28:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.087 02:28:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:52.087 02:28:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.087 02:28:32 -- common/autotest_common.sh@1220 -- # return 0 00:11:52.087 02:28:32 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.087 02:28:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.087 02:28:32 -- common/autotest_common.sh@10 -- # set +x 00:11:52.087 02:28:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.087 02:28:32 -- target/rpc.sh@81 -- # seq 1 5 00:11:52.087 02:28:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.087 02:28:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.087 02:28:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.087 02:28:32 -- common/autotest_common.sh@10 -- # set +x 00:11:52.087 02:28:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.087 02:28:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.087 02:28:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.087 02:28:32 -- common/autotest_common.sh@10 -- # set +x 00:11:52.087 [2024-11-21 02:28:32.713047] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.087 02:28:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.087 02:28:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.087 02:28:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.087 02:28:32 -- common/autotest_common.sh@10 -- # set +x 00:11:52.087 02:28:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.087 02:28:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.087 02:28:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.087 02:28:32 -- common/autotest_common.sh@10 -- # set +x 00:11:52.346 02:28:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.346 02:28:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.346 02:28:32 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.346 02:28:32 -- common/autotest_common.sh@1187 -- # local i=0 00:11:52.346 02:28:32 -- common/autotest_common.sh@1188 -- # 
local nvme_device_counter=1 nvme_devices=0 00:11:52.346 02:28:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:52.346 02:28:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:54.878 02:28:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:54.878 02:28:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:54.878 02:28:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.878 02:28:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:54.878 02:28:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.878 02:28:34 -- common/autotest_common.sh@1197 -- # return 0 00:11:54.878 02:28:34 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.878 02:28:34 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.878 02:28:34 -- common/autotest_common.sh@1208 -- # local i=0 00:11:54.878 02:28:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:54.878 02:28:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.878 02:28:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.878 02:28:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:54.878 02:28:34 -- common/autotest_common.sh@1220 -- # return 0 00:11:54.878 02:28:34 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.878 02:28:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.878 02:28:34 -- common/autotest_common.sh@10 -- # set +x 00:11:54.878 02:28:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.878 02:28:34 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.878 02:28:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.878 02:28:34 -- common/autotest_common.sh@10 -- # set +x 00:11:54.878 02:28:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.878 02:28:34 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:54.878 02:28:34 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.878 02:28:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.878 02:28:35 -- common/autotest_common.sh@10 -- # set +x 00:11:54.878 02:28:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.878 02:28:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.878 02:28:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.878 02:28:35 -- common/autotest_common.sh@10 -- # set +x 00:11:54.878 [2024-11-21 02:28:35.011733] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.878 02:28:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.878 02:28:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:54.879 02:28:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.879 02:28:35 -- common/autotest_common.sh@10 -- # set +x 00:11:54.879 02:28:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.879 02:28:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.879 02:28:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.879 02:28:35 -- common/autotest_common.sh@10 
-- # set +x 00:11:54.879 02:28:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.879 02:28:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.879 02:28:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.879 02:28:35 -- common/autotest_common.sh@1187 -- # local i=0 00:11:54.879 02:28:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.879 02:28:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:54.879 02:28:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:56.782 02:28:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:56.782 02:28:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:56.782 02:28:37 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.782 02:28:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:56.782 02:28:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.782 02:28:37 -- common/autotest_common.sh@1197 -- # return 0 00:11:56.782 02:28:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.782 02:28:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.782 02:28:37 -- common/autotest_common.sh@1208 -- # local i=0 00:11:56.782 02:28:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:56.782 02:28:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.782 02:28:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:56.782 02:28:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.782 02:28:37 -- common/autotest_common.sh@1220 -- # return 0 00:11:56.782 02:28:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.782 02:28:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.782 02:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 02:28:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.782 02:28:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.782 02:28:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.782 02:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 02:28:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.782 02:28:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.782 02:28:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.782 02:28:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.782 02:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 02:28:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.782 02:28:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.782 02:28:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.782 02:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 [2024-11-21 02:28:37.310621] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.782 02:28:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.782 02:28:37 -- 
target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.782 02:28:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.782 02:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 02:28:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.782 02:28:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.782 02:28:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.782 02:28:37 -- common/autotest_common.sh@10 -- # set +x 00:11:56.782 02:28:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.782 02:28:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.041 02:28:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.041 02:28:37 -- common/autotest_common.sh@1187 -- # local i=0 00:11:57.041 02:28:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.041 02:28:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:57.041 02:28:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:58.944 02:28:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:58.944 02:28:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:58.944 02:28:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.944 02:28:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:58.944 02:28:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.944 02:28:39 -- common/autotest_common.sh@1197 -- # return 0 00:11:58.944 02:28:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.944 02:28:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.944 02:28:39 -- common/autotest_common.sh@1208 -- # local i=0 00:11:58.944 02:28:39 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:58.944 02:28:39 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.944 02:28:39 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:58.944 02:28:39 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.203 02:28:39 -- common/autotest_common.sh@1220 -- # return 0 00:11:59.203 02:28:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.203 02:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.203 02:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:59.203 02:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.203 02:28:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.203 02:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.203 02:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:59.203 02:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.203 02:28:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.203 02:28:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.203 02:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.203 02:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:59.203 02:28:39 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.203 02:28:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.203 02:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.203 02:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:59.203 [2024-11-21 02:28:39.621329] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.203 02:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.203 02:28:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.203 02:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.203 02:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:59.203 02:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.204 02:28:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.204 02:28:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.204 02:28:39 -- common/autotest_common.sh@10 -- # set +x 00:11:59.204 02:28:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.204 02:28:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.204 02:28:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.204 02:28:39 -- common/autotest_common.sh@1187 -- # local i=0 00:11:59.204 02:28:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.204 02:28:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:59.204 02:28:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:01.737 02:28:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:01.737 02:28:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:01.737 02:28:41 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.737 02:28:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:01.737 02:28:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.737 02:28:41 -- common/autotest_common.sh@1197 -- # return 0 00:12:01.737 02:28:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.737 02:28:41 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.737 02:28:41 -- common/autotest_common.sh@1208 -- # local i=0 00:12:01.737 02:28:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:01.737 02:28:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.737 02:28:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:01.737 02:28:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.737 02:28:42 -- common/autotest_common.sh@1220 -- # return 0 00:12:01.737 02:28:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.737 02:28:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.737 02:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:01.737 02:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.737 02:28:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.737 02:28:42 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.737 02:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:01.737 02:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.737 02:28:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:01.737 02:28:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.737 02:28:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.737 02:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:01.737 02:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.737 02:28:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.737 02:28:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.737 02:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:01.737 [2024-11-21 02:28:42.032289] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.737 02:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.737 02:28:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:01.737 02:28:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.737 02:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:01.737 02:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.738 02:28:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.738 02:28:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.738 02:28:42 -- common/autotest_common.sh@10 -- # set +x 00:12:01.738 02:28:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.738 02:28:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.738 02:28:42 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.738 02:28:42 -- common/autotest_common.sh@1187 -- # local i=0 00:12:01.738 02:28:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.738 02:28:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:01.738 02:28:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:03.639 02:28:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:03.640 02:28:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:03.640 02:28:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.640 02:28:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:03.640 02:28:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.640 02:28:44 -- common/autotest_common.sh@1197 -- # return 0 00:12:03.640 02:28:44 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.898 02:28:44 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.898 02:28:44 -- common/autotest_common.sh@1208 -- # local i=0 00:12:03.898 02:28:44 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:03.898 02:28:44 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.898 02:28:44 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:03.898 02:28:44 -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:03.898 02:28:44 -- common/autotest_common.sh@1220 -- # return 0 00:12:03.898 02:28:44 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:03.898 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.898 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.898 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.898 02:28:44 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.898 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.898 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.898 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.898 02:28:44 -- target/rpc.sh@99 -- # seq 1 5 00:12:03.898 02:28:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.898 02:28:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.898 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.898 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.898 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.898 02:28:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.898 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.898 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.898 [2024-11-21 02:28:44.451211] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.898 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.898 02:28:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.898 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.898 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.898 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.898 02:28:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.898 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.898 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.899 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.899 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.899 02:28:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.899 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.899 
02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 [2024-11-21 02:28:44.499236] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.899 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.899 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.899 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.899 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.899 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.899 02:28:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:03.899 02:28:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.899 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.899 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 [2024-11-21 02:28:44.551293] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:04.158 02:28:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 [2024-11-21 02:28:44.599354] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:04.158 02:28:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.158 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.158 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.158 [2024-11-21 02:28:44.647387] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.158 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.158 02:28:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.158 02:28:44 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.159 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.159 02:28:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:04.159 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.159 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.159 02:28:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.159 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.159 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.159 02:28:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.159 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.159 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.159 02:28:44 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:04.159 02:28:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.159 02:28:44 -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 02:28:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.159 02:28:44 -- target/rpc.sh@110 -- # stats='{ 00:12:04.159 "poll_groups": [ 00:12:04.159 { 00:12:04.159 "admin_qpairs": 2, 00:12:04.159 "completed_nvme_io": 165, 00:12:04.159 "current_admin_qpairs": 0, 00:12:04.159 "current_io_qpairs": 0, 00:12:04.159 "io_qpairs": 16, 00:12:04.159 "name": "nvmf_tgt_poll_group_0", 00:12:04.159 "pending_bdev_io": 0, 00:12:04.159 "transports": [ 00:12:04.159 { 00:12:04.159 "trtype": "TCP" 00:12:04.159 } 00:12:04.159 ] 00:12:04.159 }, 00:12:04.159 { 00:12:04.159 "admin_qpairs": 3, 00:12:04.159 "completed_nvme_io": 70, 00:12:04.159 "current_admin_qpairs": 0, 00:12:04.159 "current_io_qpairs": 0, 00:12:04.159 "io_qpairs": 17, 00:12:04.159 "name": "nvmf_tgt_poll_group_1", 00:12:04.159 "pending_bdev_io": 0, 00:12:04.159 "transports": [ 00:12:04.159 { 00:12:04.159 "trtype": "TCP" 00:12:04.159 } 00:12:04.159 ] 00:12:04.159 }, 00:12:04.159 { 00:12:04.159 "admin_qpairs": 1, 00:12:04.159 "completed_nvme_io": 69, 00:12:04.159 "current_admin_qpairs": 0, 00:12:04.159 "current_io_qpairs": 0, 00:12:04.159 "io_qpairs": 19, 00:12:04.159 "name": "nvmf_tgt_poll_group_2", 00:12:04.159 "pending_bdev_io": 0, 00:12:04.159 "transports": [ 00:12:04.159 { 00:12:04.159 "trtype": "TCP" 00:12:04.159 } 00:12:04.159 ] 00:12:04.159 }, 00:12:04.159 { 00:12:04.159 "admin_qpairs": 1, 00:12:04.159 "completed_nvme_io": 116, 00:12:04.159 "current_admin_qpairs": 0, 00:12:04.159 "current_io_qpairs": 0, 00:12:04.159 "io_qpairs": 18, 00:12:04.159 "name": "nvmf_tgt_poll_group_3", 00:12:04.159 "pending_bdev_io": 0, 00:12:04.159 "transports": [ 00:12:04.159 { 00:12:04.159 "trtype": "TCP" 00:12:04.159 } 00:12:04.159 ] 00:12:04.159 } 00:12:04.159 ], 00:12:04.159 "tick_rate": 2200000000 00:12:04.159 }' 00:12:04.159 02:28:44 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:04.159 02:28:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:04.159 02:28:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:04.159 02:28:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:04.159 02:28:44 -- target/rpc.sh@112 -- # (( 7 > 0 )) 
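For reference, the "(( 7 > 0 ))" and "(( 70 > 0 ))" checks around this point sum the per-poll-group qpair counters out of the nvmf_get_stats JSON with a jq + awk pipeline. A minimal stand-alone sketch of that pattern, assuming a running target and the rpc.py path used elsewhere in this run (variable names here are illustrative, not the exact jsum helper from target/rpc.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  stats=$("$rpc" nvmf_get_stats)                        # JSON like the block printed above
  admin=$(jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')
  io=$(jq '.poll_groups[].io_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')
  (( admin > 0 )) && (( io > 0 ))                       # the assertions the test makes

The awk stage just accumulates the numbers jq emits one per line, so the same pipeline works for any numeric field in the stats output.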
00:12:04.159 02:28:44 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:04.159 02:28:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:04.159 02:28:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:04.159 02:28:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:04.418 02:28:44 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:04.418 02:28:44 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:04.418 02:28:44 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:04.418 02:28:44 -- target/rpc.sh@123 -- # nvmftestfini 00:12:04.418 02:28:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:04.418 02:28:44 -- nvmf/common.sh@116 -- # sync 00:12:04.418 02:28:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:04.418 02:28:44 -- nvmf/common.sh@119 -- # set +e 00:12:04.418 02:28:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:04.418 02:28:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:04.418 rmmod nvme_tcp 00:12:04.418 rmmod nvme_fabrics 00:12:04.418 rmmod nvme_keyring 00:12:04.418 02:28:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:04.418 02:28:44 -- nvmf/common.sh@123 -- # set -e 00:12:04.418 02:28:44 -- nvmf/common.sh@124 -- # return 0 00:12:04.418 02:28:44 -- nvmf/common.sh@477 -- # '[' -n 66090 ']' 00:12:04.418 02:28:44 -- nvmf/common.sh@478 -- # killprocess 66090 00:12:04.418 02:28:44 -- common/autotest_common.sh@936 -- # '[' -z 66090 ']' 00:12:04.418 02:28:44 -- common/autotest_common.sh@940 -- # kill -0 66090 00:12:04.418 02:28:44 -- common/autotest_common.sh@941 -- # uname 00:12:04.418 02:28:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:04.418 02:28:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66090 00:12:04.418 02:28:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:04.418 02:28:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:04.418 killing process with pid 66090 00:12:04.418 02:28:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66090' 00:12:04.418 02:28:44 -- common/autotest_common.sh@955 -- # kill 66090 00:12:04.418 02:28:44 -- common/autotest_common.sh@960 -- # wait 66090 00:12:04.676 02:28:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:04.676 02:28:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:04.676 02:28:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:04.676 02:28:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.676 02:28:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:04.676 02:28:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.676 02:28:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.676 02:28:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.936 02:28:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:04.936 00:12:04.936 real 0m19.473s 00:12:04.936 user 1m12.969s 00:12:04.936 sys 0m2.574s 00:12:04.936 02:28:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:04.936 02:28:45 -- common/autotest_common.sh@10 -- # set +x 00:12:04.936 ************************************ 00:12:04.936 END TEST nvmf_rpc 00:12:04.936 ************************************ 00:12:04.936 02:28:45 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:04.936 02:28:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:04.936 02:28:45 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:12:04.936 02:28:45 -- common/autotest_common.sh@10 -- # set +x 00:12:04.936 ************************************ 00:12:04.936 START TEST nvmf_invalid 00:12:04.936 ************************************ 00:12:04.936 02:28:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:04.936 * Looking for test storage... 00:12:04.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:04.936 02:28:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:04.936 02:28:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:04.936 02:28:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:04.936 02:28:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:04.936 02:28:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:04.936 02:28:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:04.936 02:28:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:04.936 02:28:45 -- scripts/common.sh@335 -- # IFS=.-: 00:12:04.936 02:28:45 -- scripts/common.sh@335 -- # read -ra ver1 00:12:04.936 02:28:45 -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.936 02:28:45 -- scripts/common.sh@336 -- # read -ra ver2 00:12:04.936 02:28:45 -- scripts/common.sh@337 -- # local 'op=<' 00:12:04.936 02:28:45 -- scripts/common.sh@339 -- # ver1_l=2 00:12:04.936 02:28:45 -- scripts/common.sh@340 -- # ver2_l=1 00:12:04.936 02:28:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:04.936 02:28:45 -- scripts/common.sh@343 -- # case "$op" in 00:12:04.936 02:28:45 -- scripts/common.sh@344 -- # : 1 00:12:04.936 02:28:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:04.936 02:28:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.936 02:28:45 -- scripts/common.sh@364 -- # decimal 1 00:12:04.936 02:28:45 -- scripts/common.sh@352 -- # local d=1 00:12:04.936 02:28:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.936 02:28:45 -- scripts/common.sh@354 -- # echo 1 00:12:04.936 02:28:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:04.936 02:28:45 -- scripts/common.sh@365 -- # decimal 2 00:12:04.936 02:28:45 -- scripts/common.sh@352 -- # local d=2 00:12:04.936 02:28:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.936 02:28:45 -- scripts/common.sh@354 -- # echo 2 00:12:04.936 02:28:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:04.936 02:28:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:04.936 02:28:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:04.936 02:28:45 -- scripts/common.sh@367 -- # return 0 00:12:04.936 02:28:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.936 02:28:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:04.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.936 --rc genhtml_branch_coverage=1 00:12:04.936 --rc genhtml_function_coverage=1 00:12:04.936 --rc genhtml_legend=1 00:12:04.936 --rc geninfo_all_blocks=1 00:12:04.936 --rc geninfo_unexecuted_blocks=1 00:12:04.936 00:12:04.936 ' 00:12:04.936 02:28:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:04.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.936 --rc genhtml_branch_coverage=1 00:12:04.936 --rc genhtml_function_coverage=1 00:12:04.936 --rc genhtml_legend=1 00:12:04.936 --rc geninfo_all_blocks=1 00:12:04.936 --rc geninfo_unexecuted_blocks=1 00:12:04.936 00:12:04.936 ' 00:12:04.936 02:28:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:04.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.936 --rc genhtml_branch_coverage=1 00:12:04.936 --rc genhtml_function_coverage=1 00:12:04.936 --rc genhtml_legend=1 00:12:04.936 --rc geninfo_all_blocks=1 00:12:04.937 --rc geninfo_unexecuted_blocks=1 00:12:04.937 00:12:04.937 ' 00:12:04.937 02:28:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:04.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.937 --rc genhtml_branch_coverage=1 00:12:04.937 --rc genhtml_function_coverage=1 00:12:04.937 --rc genhtml_legend=1 00:12:04.937 --rc geninfo_all_blocks=1 00:12:04.937 --rc geninfo_unexecuted_blocks=1 00:12:04.937 00:12:04.937 ' 00:12:04.937 02:28:45 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:04.937 02:28:45 -- nvmf/common.sh@7 -- # uname -s 00:12:04.937 02:28:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.937 02:28:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.937 02:28:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.937 02:28:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.937 02:28:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.937 02:28:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.937 02:28:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.937 02:28:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.937 02:28:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.937 02:28:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.937 02:28:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:12:04.937 
02:28:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:12:04.937 02:28:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.937 02:28:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.937 02:28:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:04.937 02:28:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.937 02:28:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.937 02:28:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.937 02:28:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.937 02:28:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.937 02:28:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.937 02:28:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.937 02:28:45 -- paths/export.sh@5 -- # export PATH 00:12:04.937 02:28:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.937 02:28:45 -- nvmf/common.sh@46 -- # : 0 00:12:04.937 02:28:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:04.937 02:28:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:04.937 02:28:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:04.937 02:28:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.937 02:28:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.937 02:28:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
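The NVME_HOSTNQN/NVME_HOSTID pair set just above is what the "nvme connect" invocations earlier in this run pass as --hostnqn/--hostid. A minimal sketch of that usage, assuming nvme-cli and the 10.0.0.2:4420 listener the test configures (deriving the host ID from the NQN's UUID suffix is an assumption for illustration, not necessarily how nvmf/common.sh does it):

  NVME_HOSTNQN=$(nvme gen-hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}            # assumed: reuse the UUID part as the host ID
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # ... I/O against the new /dev/nvme* device ...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1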
00:12:04.937 02:28:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:04.937 02:28:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:04.937 02:28:45 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:04.937 02:28:45 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.937 02:28:45 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:04.937 02:28:45 -- target/invalid.sh@14 -- # target=foobar 00:12:04.937 02:28:45 -- target/invalid.sh@16 -- # RANDOM=0 00:12:04.937 02:28:45 -- target/invalid.sh@34 -- # nvmftestinit 00:12:04.937 02:28:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:04.937 02:28:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.937 02:28:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:04.937 02:28:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:04.937 02:28:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:04.937 02:28:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.937 02:28:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.937 02:28:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.937 02:28:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:04.937 02:28:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:04.937 02:28:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:04.937 02:28:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:04.937 02:28:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:04.937 02:28:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:04.937 02:28:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.937 02:28:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.937 02:28:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:04.937 02:28:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:04.937 02:28:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:04.937 02:28:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:04.937 02:28:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:04.937 02:28:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.937 02:28:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:04.937 02:28:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:04.937 02:28:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:04.937 02:28:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:04.937 02:28:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:05.197 02:28:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:05.197 Cannot find device "nvmf_tgt_br" 00:12:05.197 02:28:45 -- nvmf/common.sh@154 -- # true 00:12:05.197 02:28:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.197 Cannot find device "nvmf_tgt_br2" 00:12:05.197 02:28:45 -- nvmf/common.sh@155 -- # true 00:12:05.197 02:28:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:05.197 02:28:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:05.197 Cannot find device "nvmf_tgt_br" 00:12:05.197 02:28:45 -- nvmf/common.sh@157 -- # true 00:12:05.197 02:28:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:05.197 Cannot find device "nvmf_tgt_br2" 00:12:05.197 02:28:45 -- nvmf/common.sh@158 -- # true 00:12:05.197 02:28:45 
-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:05.197 02:28:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:05.197 02:28:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.197 02:28:45 -- nvmf/common.sh@161 -- # true 00:12:05.197 02:28:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.197 02:28:45 -- nvmf/common.sh@162 -- # true 00:12:05.197 02:28:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:05.197 02:28:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:05.197 02:28:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.197 02:28:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.197 02:28:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.197 02:28:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.197 02:28:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.197 02:28:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:05.197 02:28:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:05.197 02:28:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:05.197 02:28:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:05.197 02:28:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:05.197 02:28:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:05.197 02:28:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.197 02:28:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.197 02:28:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.197 02:28:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:05.197 02:28:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:05.197 02:28:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.456 02:28:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.456 02:28:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:05.456 02:28:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:05.456 02:28:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:05.456 02:28:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:05.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:05.456 00:12:05.456 --- 10.0.0.2 ping statistics --- 00:12:05.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.456 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:05.456 02:28:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:05.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:05.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:12:05.456 00:12:05.456 --- 10.0.0.3 ping statistics --- 00:12:05.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.456 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:12:05.456 02:28:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:05.456 00:12:05.456 --- 10.0.0.1 ping statistics --- 00:12:05.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.456 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:05.456 02:28:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.456 02:28:45 -- nvmf/common.sh@421 -- # return 0 00:12:05.456 02:28:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:05.456 02:28:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.456 02:28:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:05.456 02:28:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:05.456 02:28:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.456 02:28:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:05.456 02:28:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:05.456 02:28:45 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:05.456 02:28:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:05.456 02:28:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.456 02:28:45 -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 02:28:45 -- nvmf/common.sh@469 -- # nvmfpid=66614 00:12:05.456 02:28:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.456 02:28:45 -- nvmf/common.sh@470 -- # waitforlisten 66614 00:12:05.456 02:28:45 -- common/autotest_common.sh@829 -- # '[' -z 66614 ']' 00:12:05.456 02:28:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.456 02:28:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.456 02:28:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.456 02:28:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.456 02:28:45 -- common/autotest_common.sh@10 -- # set +x 00:12:05.456 [2024-11-21 02:28:45.986581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:05.456 [2024-11-21 02:28:45.986670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.715 [2024-11-21 02:28:46.125893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.715 [2024-11-21 02:28:46.224976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:05.715 [2024-11-21 02:28:46.225114] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.715 [2024-11-21 02:28:46.225127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
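With the target up inside the namespace, the invalid.sh cases that follow all use the same shape: issue an RPC with a deliberately bad parameter, capture the client error, and string-match it. A minimal sketch of the first case below, assuming the rpc.py path from this run (the exact capture/compare plumbing in target/invalid.sh differs only in bookkeeping):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1265 2>&1) || true
  [[ $out == *"Unable to find target"* ]]    # expects the Code=-32603 error text seen below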
00:12:05.715 [2024-11-21 02:28:46.225135] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.715 [2024-11-21 02:28:46.225267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.715 [2024-11-21 02:28:46.225681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.715 [2024-11-21 02:28:46.226271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.715 [2024-11-21 02:28:46.226282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.282 02:28:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:06.282 02:28:46 -- common/autotest_common.sh@862 -- # return 0 00:12:06.282 02:28:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:06.282 02:28:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:06.282 02:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:06.540 02:28:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.541 02:28:46 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:06.541 02:28:46 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1265 00:12:06.541 [2024-11-21 02:28:47.142665] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:06.541 02:28:47 -- target/invalid.sh@40 -- # out='2024/11/21 02:28:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1265 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:06.541 request: 00:12:06.541 { 00:12:06.541 "method": "nvmf_create_subsystem", 00:12:06.541 "params": { 00:12:06.541 "nqn": "nqn.2016-06.io.spdk:cnode1265", 00:12:06.541 "tgt_name": "foobar" 00:12:06.541 } 00:12:06.541 } 00:12:06.541 Got JSON-RPC error response 00:12:06.541 GoRPCClient: error on JSON-RPC call' 00:12:06.541 02:28:47 -- target/invalid.sh@41 -- # [[ 2024/11/21 02:28:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1265 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:06.541 request: 00:12:06.541 { 00:12:06.541 "method": "nvmf_create_subsystem", 00:12:06.541 "params": { 00:12:06.541 "nqn": "nqn.2016-06.io.spdk:cnode1265", 00:12:06.541 "tgt_name": "foobar" 00:12:06.541 } 00:12:06.541 } 00:12:06.541 Got JSON-RPC error response 00:12:06.541 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:06.541 02:28:47 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:06.541 02:28:47 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15913 00:12:06.800 [2024-11-21 02:28:47.438945] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15913: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:07.059 02:28:47 -- target/invalid.sh@45 -- # out='2024/11/21 02:28:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15913 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:07.059 request: 00:12:07.059 { 00:12:07.059 
"method": "nvmf_create_subsystem", 00:12:07.059 "params": { 00:12:07.059 "nqn": "nqn.2016-06.io.spdk:cnode15913", 00:12:07.059 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:07.059 } 00:12:07.059 } 00:12:07.059 Got JSON-RPC error response 00:12:07.059 GoRPCClient: error on JSON-RPC call' 00:12:07.059 02:28:47 -- target/invalid.sh@46 -- # [[ 2024/11/21 02:28:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15913 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:07.059 request: 00:12:07.059 { 00:12:07.059 "method": "nvmf_create_subsystem", 00:12:07.059 "params": { 00:12:07.059 "nqn": "nqn.2016-06.io.spdk:cnode15913", 00:12:07.059 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:07.059 } 00:12:07.059 } 00:12:07.059 Got JSON-RPC error response 00:12:07.059 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.059 02:28:47 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:07.059 02:28:47 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14034 00:12:07.060 [2024-11-21 02:28:47.655148] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14034: invalid model number 'SPDK_Controller' 00:12:07.060 02:28:47 -- target/invalid.sh@50 -- # out='2024/11/21 02:28:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode14034], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:07.060 request: 00:12:07.060 { 00:12:07.060 "method": "nvmf_create_subsystem", 00:12:07.060 "params": { 00:12:07.060 "nqn": "nqn.2016-06.io.spdk:cnode14034", 00:12:07.060 "model_number": "SPDK_Controller\u001f" 00:12:07.060 } 00:12:07.060 } 00:12:07.060 Got JSON-RPC error response 00:12:07.060 GoRPCClient: error on JSON-RPC call' 00:12:07.060 02:28:47 -- target/invalid.sh@51 -- # [[ 2024/11/21 02:28:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode14034], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:07.060 request: 00:12:07.060 { 00:12:07.060 "method": "nvmf_create_subsystem", 00:12:07.060 "params": { 00:12:07.060 "nqn": "nqn.2016-06.io.spdk:cnode14034", 00:12:07.060 "model_number": "SPDK_Controller\u001f" 00:12:07.060 } 00:12:07.060 } 00:12:07.060 Got JSON-RPC error response 00:12:07.060 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:07.060 02:28:47 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:07.060 02:28:47 -- target/invalid.sh@19 -- # local length=21 ll 00:12:07.060 02:28:47 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.060 02:28:47 -- target/invalid.sh@21 -- # local chars 00:12:07.060 02:28:47 -- target/invalid.sh@22 -- # local 
string 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # printf %x 52 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # string+=4 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # printf %x 120 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # string+=x 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # printf %x 88 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # string+=X 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # printf %x 102 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:07.060 02:28:47 -- target/invalid.sh@25 -- # string+=f 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.060 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.319 02:28:47 -- target/invalid.sh@25 -- # printf %x 123 00:12:07.319 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:07.319 02:28:47 -- target/invalid.sh@25 -- # string+='{' 00:12:07.319 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.319 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.319 02:28:47 -- target/invalid.sh@25 -- # printf %x 99 00:12:07.319 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:07.319 02:28:47 -- target/invalid.sh@25 -- # string+=c 00:12:07.319 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.319 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.319 02:28:47 -- target/invalid.sh@25 -- # printf %x 118 00:12:07.319 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=v 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 106 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=j 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 93 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=']' 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 86 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=V 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 37 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # 
string+=% 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 36 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+='$' 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 115 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=s 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 76 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=L 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 33 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+='!' 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 71 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=G 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 69 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=E 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 68 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=D 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 77 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=M 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 50 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=2 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # printf %x 107 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:07.320 02:28:47 -- target/invalid.sh@25 -- # string+=k 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.320 02:28:47 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.320 02:28:47 -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:12:07.320 02:28:47 -- target/invalid.sh@31 -- # echo '4xXf{cvj]V%$sL!GEDM2k' 00:12:07.320 02:28:47 -- 
target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '4xXf{cvj]V%$sL!GEDM2k' nqn.2016-06.io.spdk:cnode19612 00:12:07.579 [2024-11-21 02:28:48.027533] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19612: invalid serial number '4xXf{cvj]V%$sL!GEDM2k' 00:12:07.580 02:28:48 -- target/invalid.sh@54 -- # out='2024/11/21 02:28:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19612 serial_number:4xXf{cvj]V%$sL!GEDM2k], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 4xXf{cvj]V%$sL!GEDM2k 00:12:07.580 request: 00:12:07.580 { 00:12:07.580 "method": "nvmf_create_subsystem", 00:12:07.580 "params": { 00:12:07.580 "nqn": "nqn.2016-06.io.spdk:cnode19612", 00:12:07.580 "serial_number": "4xXf{cvj]V%$sL!GEDM2k" 00:12:07.580 } 00:12:07.580 } 00:12:07.580 Got JSON-RPC error response 00:12:07.580 GoRPCClient: error on JSON-RPC call' 00:12:07.580 02:28:48 -- target/invalid.sh@55 -- # [[ 2024/11/21 02:28:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19612 serial_number:4xXf{cvj]V%$sL!GEDM2k], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 4xXf{cvj]V%$sL!GEDM2k 00:12:07.580 request: 00:12:07.580 { 00:12:07.580 "method": "nvmf_create_subsystem", 00:12:07.580 "params": { 00:12:07.580 "nqn": "nqn.2016-06.io.spdk:cnode19612", 00:12:07.580 "serial_number": "4xXf{cvj]V%$sL!GEDM2k" 00:12:07.580 } 00:12:07.580 } 00:12:07.580 Got JSON-RPC error response 00:12:07.580 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.580 02:28:48 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:07.580 02:28:48 -- target/invalid.sh@19 -- # local length=41 ll 00:12:07.580 02:28:48 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.580 02:28:48 -- target/invalid.sh@21 -- # local chars 00:12:07.580 02:28:48 -- target/invalid.sh@22 -- # local string 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 88 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=X 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 112 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=p 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 53 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=5 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 69 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=E 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 95 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=_ 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 42 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+='*' 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 77 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=M 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 50 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=2 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 53 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=5 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 47 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=/ 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 34 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+='"' 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 115 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=s 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 37 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=% 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 97 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=a 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 74 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=J 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 87 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=W 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 86 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=V 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 99 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=c 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 88 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=X 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 102 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=f 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 61 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+== 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 118 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=v 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 126 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+='~' 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 68 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=D 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 33 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+='!' 
00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 59 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+=';' 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.580 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # printf %x 123 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:07.580 02:28:48 -- target/invalid.sh@25 -- # string+='{' 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 65 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=A 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 69 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=E 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 93 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=']' 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 122 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=z 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 69 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=E 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 116 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=t 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 71 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=G 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 44 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=, 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 69 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=E 
00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 66 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=B 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # printf %x 99 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:07.581 02:28:48 -- target/invalid.sh@25 -- # string+=c 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.581 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # printf %x 111 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # string+=o 00:12:07.839 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.839 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # printf %x 86 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # string+=V 00:12:07.839 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.839 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # printf %x 83 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:07.839 02:28:48 -- target/invalid.sh@25 -- # string+=S 00:12:07.839 02:28:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.839 02:28:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.839 02:28:48 -- target/invalid.sh@28 -- # [[ X == \- ]] 00:12:07.839 02:28:48 -- target/invalid.sh@31 -- # echo 'Xp5E_*M25/"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS' 00:12:07.839 02:28:48 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Xp5E_*M25/"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS' nqn.2016-06.io.spdk:cnode26008 00:12:07.839 [2024-11-21 02:28:48.427901] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26008: invalid model number 'Xp5E_*M25/"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS' 00:12:07.839 02:28:48 -- target/invalid.sh@58 -- # out='2024/11/21 02:28:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Xp5E_*M25/"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS nqn:nqn.2016-06.io.spdk:cnode26008], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Xp5E_*M25/"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS 00:12:07.839 request: 00:12:07.839 { 00:12:07.839 "method": "nvmf_create_subsystem", 00:12:07.839 "params": { 00:12:07.839 "nqn": "nqn.2016-06.io.spdk:cnode26008", 00:12:07.839 "model_number": "Xp5E_*M25/\"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS" 00:12:07.839 } 00:12:07.839 } 00:12:07.839 Got JSON-RPC error response 00:12:07.839 GoRPCClient: error on JSON-RPC call' 00:12:07.839 02:28:48 -- target/invalid.sh@59 -- # [[ 2024/11/21 02:28:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:Xp5E_*M25/"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS nqn:nqn.2016-06.io.spdk:cnode26008], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN Xp5E_*M25/"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS 00:12:07.839 request: 00:12:07.839 { 00:12:07.839 "method": "nvmf_create_subsystem", 00:12:07.839 "params": { 00:12:07.839 "nqn": "nqn.2016-06.io.spdk:cnode26008", 
00:12:07.839 "model_number": "Xp5E_*M25/\"s%aJWVcXf=v~D!;{AE]zEtG,EBcoVS" 00:12:07.839 } 00:12:07.839 } 00:12:07.839 Got JSON-RPC error response 00:12:07.839 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:07.839 02:28:48 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:08.098 [2024-11-21 02:28:48.628130] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.098 02:28:48 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:08.355 02:28:48 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:08.355 02:28:48 -- target/invalid.sh@67 -- # echo '' 00:12:08.355 02:28:48 -- target/invalid.sh@67 -- # head -n 1 00:12:08.355 02:28:48 -- target/invalid.sh@67 -- # IP= 00:12:08.355 02:28:48 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:08.615 [2024-11-21 02:28:49.191537] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:08.615 02:28:49 -- target/invalid.sh@69 -- # out='2024/11/21 02:28:49 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:08.615 request: 00:12:08.615 { 00:12:08.615 "method": "nvmf_subsystem_remove_listener", 00:12:08.615 "params": { 00:12:08.615 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:08.615 "listen_address": { 00:12:08.615 "trtype": "tcp", 00:12:08.615 "traddr": "", 00:12:08.615 "trsvcid": "4421" 00:12:08.615 } 00:12:08.615 } 00:12:08.615 } 00:12:08.615 Got JSON-RPC error response 00:12:08.615 GoRPCClient: error on JSON-RPC call' 00:12:08.615 02:28:49 -- target/invalid.sh@70 -- # [[ 2024/11/21 02:28:49 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:08.615 request: 00:12:08.615 { 00:12:08.615 "method": "nvmf_subsystem_remove_listener", 00:12:08.615 "params": { 00:12:08.615 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:08.615 "listen_address": { 00:12:08.615 "trtype": "tcp", 00:12:08.615 "traddr": "", 00:12:08.615 "trsvcid": "4421" 00:12:08.615 } 00:12:08.615 } 00:12:08.615 } 00:12:08.615 Got JSON-RPC error response 00:12:08.615 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:08.615 02:28:49 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18976 -i 0 00:12:08.873 [2024-11-21 02:28:49.479884] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18976: invalid cntlid range [0-65519] 00:12:08.873 02:28:49 -- target/invalid.sh@73 -- # out='2024/11/21 02:28:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode18976], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:08.873 request: 00:12:08.873 { 00:12:08.873 "method": "nvmf_create_subsystem", 00:12:08.873 "params": { 00:12:08.873 "nqn": "nqn.2016-06.io.spdk:cnode18976", 00:12:08.873 "min_cntlid": 0 
00:12:08.873 } 00:12:08.873 } 00:12:08.873 Got JSON-RPC error response 00:12:08.873 GoRPCClient: error on JSON-RPC call' 00:12:08.873 02:28:49 -- target/invalid.sh@74 -- # [[ 2024/11/21 02:28:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode18976], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:08.873 request: 00:12:08.873 { 00:12:08.873 "method": "nvmf_create_subsystem", 00:12:08.873 "params": { 00:12:08.873 "nqn": "nqn.2016-06.io.spdk:cnode18976", 00:12:08.873 "min_cntlid": 0 00:12:08.873 } 00:12:08.873 } 00:12:08.873 Got JSON-RPC error response 00:12:08.873 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:08.873 02:28:49 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21869 -i 65520 00:12:09.439 [2024-11-21 02:28:49.784315] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21869: invalid cntlid range [65520-65519] 00:12:09.439 02:28:49 -- target/invalid.sh@75 -- # out='2024/11/21 02:28:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21869], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:09.439 request: 00:12:09.439 { 00:12:09.439 "method": "nvmf_create_subsystem", 00:12:09.439 "params": { 00:12:09.439 "nqn": "nqn.2016-06.io.spdk:cnode21869", 00:12:09.439 "min_cntlid": 65520 00:12:09.439 } 00:12:09.439 } 00:12:09.439 Got JSON-RPC error response 00:12:09.439 GoRPCClient: error on JSON-RPC call' 00:12:09.439 02:28:49 -- target/invalid.sh@76 -- # [[ 2024/11/21 02:28:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21869], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:09.439 request: 00:12:09.439 { 00:12:09.439 "method": "nvmf_create_subsystem", 00:12:09.439 "params": { 00:12:09.439 "nqn": "nqn.2016-06.io.spdk:cnode21869", 00:12:09.439 "min_cntlid": 65520 00:12:09.439 } 00:12:09.439 } 00:12:09.439 Got JSON-RPC error response 00:12:09.439 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.439 02:28:49 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32422 -I 0 00:12:09.697 [2024-11-21 02:28:50.108931] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32422: invalid cntlid range [1-0] 00:12:09.697 02:28:50 -- target/invalid.sh@77 -- # out='2024/11/21 02:28:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode32422], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:09.697 request: 00:12:09.697 { 00:12:09.697 "method": "nvmf_create_subsystem", 00:12:09.697 "params": { 00:12:09.697 "nqn": "nqn.2016-06.io.spdk:cnode32422", 00:12:09.697 "max_cntlid": 0 00:12:09.697 } 00:12:09.697 } 00:12:09.697 Got JSON-RPC error response 00:12:09.697 GoRPCClient: error on JSON-RPC call' 00:12:09.697 02:28:50 -- target/invalid.sh@78 -- # [[ 2024/11/21 02:28:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode32422], err: error received for 
nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:09.697 request: 00:12:09.697 { 00:12:09.697 "method": "nvmf_create_subsystem", 00:12:09.697 "params": { 00:12:09.697 "nqn": "nqn.2016-06.io.spdk:cnode32422", 00:12:09.697 "max_cntlid": 0 00:12:09.697 } 00:12:09.697 } 00:12:09.697 Got JSON-RPC error response 00:12:09.697 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.697 02:28:50 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31571 -I 65520 00:12:09.955 [2024-11-21 02:28:50.417449] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31571: invalid cntlid range [1-65520] 00:12:09.955 02:28:50 -- target/invalid.sh@79 -- # out='2024/11/21 02:28:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode31571], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:09.955 request: 00:12:09.955 { 00:12:09.955 "method": "nvmf_create_subsystem", 00:12:09.955 "params": { 00:12:09.955 "nqn": "nqn.2016-06.io.spdk:cnode31571", 00:12:09.955 "max_cntlid": 65520 00:12:09.955 } 00:12:09.955 } 00:12:09.955 Got JSON-RPC error response 00:12:09.955 GoRPCClient: error on JSON-RPC call' 00:12:09.955 02:28:50 -- target/invalid.sh@80 -- # [[ 2024/11/21 02:28:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode31571], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:09.955 request: 00:12:09.955 { 00:12:09.955 "method": "nvmf_create_subsystem", 00:12:09.955 "params": { 00:12:09.955 "nqn": "nqn.2016-06.io.spdk:cnode31571", 00:12:09.955 "max_cntlid": 65520 00:12:09.955 } 00:12:09.955 } 00:12:09.955 Got JSON-RPC error response 00:12:09.955 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.955 02:28:50 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2183 -i 6 -I 5 00:12:10.212 [2024-11-21 02:28:50.765784] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2183: invalid cntlid range [6-5] 00:12:10.212 02:28:50 -- target/invalid.sh@83 -- # out='2024/11/21 02:28:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode2183], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:10.212 request: 00:12:10.212 { 00:12:10.212 "method": "nvmf_create_subsystem", 00:12:10.212 "params": { 00:12:10.212 "nqn": "nqn.2016-06.io.spdk:cnode2183", 00:12:10.212 "min_cntlid": 6, 00:12:10.212 "max_cntlid": 5 00:12:10.212 } 00:12:10.212 } 00:12:10.212 Got JSON-RPC error response 00:12:10.212 GoRPCClient: error on JSON-RPC call' 00:12:10.212 02:28:50 -- target/invalid.sh@84 -- # [[ 2024/11/21 02:28:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode2183], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:10.212 request: 00:12:10.212 { 00:12:10.212 "method": "nvmf_create_subsystem", 00:12:10.212 "params": { 00:12:10.212 "nqn": "nqn.2016-06.io.spdk:cnode2183", 00:12:10.212 "min_cntlid": 6, 00:12:10.212 "max_cntlid": 5 
00:12:10.212 } 00:12:10.212 } 00:12:10.212 Got JSON-RPC error response 00:12:10.212 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.212 02:28:50 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:10.470 02:28:50 -- target/invalid.sh@87 -- # out='request: 00:12:10.470 { 00:12:10.470 "name": "foobar", 00:12:10.470 "method": "nvmf_delete_target", 00:12:10.470 "req_id": 1 00:12:10.470 } 00:12:10.470 Got JSON-RPC error response 00:12:10.470 response: 00:12:10.470 { 00:12:10.470 "code": -32602, 00:12:10.470 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:10.470 }' 00:12:10.470 02:28:50 -- target/invalid.sh@88 -- # [[ request: 00:12:10.470 { 00:12:10.470 "name": "foobar", 00:12:10.470 "method": "nvmf_delete_target", 00:12:10.470 "req_id": 1 00:12:10.470 } 00:12:10.470 Got JSON-RPC error response 00:12:10.470 response: 00:12:10.470 { 00:12:10.470 "code": -32602, 00:12:10.470 "message": "The specified target doesn't exist, cannot delete it." 00:12:10.470 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:10.470 02:28:50 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:10.470 02:28:50 -- target/invalid.sh@91 -- # nvmftestfini 00:12:10.470 02:28:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:10.470 02:28:50 -- nvmf/common.sh@116 -- # sync 00:12:10.470 02:28:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:10.470 02:28:50 -- nvmf/common.sh@119 -- # set +e 00:12:10.470 02:28:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:10.470 02:28:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:10.470 rmmod nvme_tcp 00:12:10.470 rmmod nvme_fabrics 00:12:10.470 rmmod nvme_keyring 00:12:10.470 02:28:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:10.470 02:28:51 -- nvmf/common.sh@123 -- # set -e 00:12:10.470 02:28:51 -- nvmf/common.sh@124 -- # return 0 00:12:10.470 02:28:51 -- nvmf/common.sh@477 -- # '[' -n 66614 ']' 00:12:10.470 02:28:51 -- nvmf/common.sh@478 -- # killprocess 66614 00:12:10.470 02:28:51 -- common/autotest_common.sh@936 -- # '[' -z 66614 ']' 00:12:10.470 02:28:51 -- common/autotest_common.sh@940 -- # kill -0 66614 00:12:10.470 02:28:51 -- common/autotest_common.sh@941 -- # uname 00:12:10.470 02:28:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.470 02:28:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66614 00:12:10.470 02:28:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:10.470 02:28:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:10.470 killing process with pid 66614 00:12:10.470 02:28:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66614' 00:12:10.470 02:28:51 -- common/autotest_common.sh@955 -- # kill 66614 00:12:10.470 02:28:51 -- common/autotest_common.sh@960 -- # wait 66614 00:12:11.036 02:28:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:11.036 02:28:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:11.036 02:28:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:11.036 02:28:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.036 02:28:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:11.036 02:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.036 02:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:11.036 02:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.036 02:28:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:11.036 00:12:11.036 real 0m6.122s 00:12:11.036 user 0m23.839s 00:12:11.036 sys 0m1.347s 00:12:11.036 02:28:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:11.036 02:28:51 -- common/autotest_common.sh@10 -- # set +x 00:12:11.036 ************************************ 00:12:11.036 END TEST nvmf_invalid 00:12:11.036 ************************************ 00:12:11.036 02:28:51 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:11.036 02:28:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:11.036 02:28:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.036 02:28:51 -- common/autotest_common.sh@10 -- # set +x 00:12:11.036 ************************************ 00:12:11.036 START TEST nvmf_abort 00:12:11.036 ************************************ 00:12:11.036 02:28:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:11.036 * Looking for test storage... 00:12:11.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:11.036 02:28:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:11.036 02:28:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:11.036 02:28:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:11.294 02:28:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:11.294 02:28:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:11.294 02:28:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:11.294 02:28:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:11.294 02:28:51 -- scripts/common.sh@335 -- # IFS=.-: 00:12:11.294 02:28:51 -- scripts/common.sh@335 -- # read -ra ver1 00:12:11.294 02:28:51 -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.294 02:28:51 -- scripts/common.sh@336 -- # read -ra ver2 00:12:11.294 02:28:51 -- scripts/common.sh@337 -- # local 'op=<' 00:12:11.294 02:28:51 -- scripts/common.sh@339 -- # ver1_l=2 00:12:11.294 02:28:51 -- scripts/common.sh@340 -- # ver2_l=1 00:12:11.294 02:28:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:11.294 02:28:51 -- scripts/common.sh@343 -- # case "$op" in 00:12:11.294 02:28:51 -- scripts/common.sh@344 -- # : 1 00:12:11.294 02:28:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:11.294 02:28:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.294 02:28:51 -- scripts/common.sh@364 -- # decimal 1 00:12:11.294 02:28:51 -- scripts/common.sh@352 -- # local d=1 00:12:11.294 02:28:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.294 02:28:51 -- scripts/common.sh@354 -- # echo 1 00:12:11.294 02:28:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:11.294 02:28:51 -- scripts/common.sh@365 -- # decimal 2 00:12:11.294 02:28:51 -- scripts/common.sh@352 -- # local d=2 00:12:11.294 02:28:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.294 02:28:51 -- scripts/common.sh@354 -- # echo 2 00:12:11.294 02:28:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:11.294 02:28:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:11.294 02:28:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:11.294 02:28:51 -- scripts/common.sh@367 -- # return 0 00:12:11.294 02:28:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.294 02:28:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:11.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.294 --rc genhtml_branch_coverage=1 00:12:11.294 --rc genhtml_function_coverage=1 00:12:11.294 --rc genhtml_legend=1 00:12:11.294 --rc geninfo_all_blocks=1 00:12:11.294 --rc geninfo_unexecuted_blocks=1 00:12:11.294 00:12:11.294 ' 00:12:11.294 02:28:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:11.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.294 --rc genhtml_branch_coverage=1 00:12:11.294 --rc genhtml_function_coverage=1 00:12:11.294 --rc genhtml_legend=1 00:12:11.294 --rc geninfo_all_blocks=1 00:12:11.294 --rc geninfo_unexecuted_blocks=1 00:12:11.294 00:12:11.294 ' 00:12:11.294 02:28:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:11.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.294 --rc genhtml_branch_coverage=1 00:12:11.294 --rc genhtml_function_coverage=1 00:12:11.294 --rc genhtml_legend=1 00:12:11.294 --rc geninfo_all_blocks=1 00:12:11.294 --rc geninfo_unexecuted_blocks=1 00:12:11.294 00:12:11.294 ' 00:12:11.294 02:28:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:11.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.294 --rc genhtml_branch_coverage=1 00:12:11.294 --rc genhtml_function_coverage=1 00:12:11.294 --rc genhtml_legend=1 00:12:11.294 --rc geninfo_all_blocks=1 00:12:11.294 --rc geninfo_unexecuted_blocks=1 00:12:11.294 00:12:11.294 ' 00:12:11.294 02:28:51 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:11.294 02:28:51 -- nvmf/common.sh@7 -- # uname -s 00:12:11.294 02:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.294 02:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.294 02:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.294 02:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.294 02:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.294 02:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.294 02:28:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.294 02:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.294 02:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.294 02:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.294 02:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:12:11.294 
02:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:12:11.295 02:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.295 02:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.295 02:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:11.295 02:28:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.295 02:28:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.295 02:28:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.295 02:28:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.295 02:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.295 02:28:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.295 02:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.295 02:28:51 -- paths/export.sh@5 -- # export PATH 00:12:11.295 02:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.295 02:28:51 -- nvmf/common.sh@46 -- # : 0 00:12:11.295 02:28:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:11.295 02:28:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:11.295 02:28:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:11.295 02:28:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.295 02:28:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.295 02:28:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
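The nvmf_invalid test that finishes above exercises nvmf_create_subsystem parameter validation entirely through scripts/rpc.py. For reference, a minimal sketch of reproducing two of the recorded rejections by hand is shown here; the paths, NQNs and flags are copied from this log, and the only assumption is that an nvmf_tgt with the TCP transport already created is running and reachable on rpc.py's default socket.

# an otherwise valid serial number with a trailing 0x1f control character is
# rejected with "Code=-32602 Msg=Invalid SN ...", exactly as logged above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
    -s $'SPDKISFASTANDAWESOME\x1f' nqn.2016-06.io.spdk:cnode15913

# cntlid ranges are validated the same way; min_cntlid 0 falls outside the
# allowed range and fails with "Invalid cntlid range [0-65519]"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
    -i 0 nqn.2016-06.io.spdk:cnode18976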
00:12:11.295 02:28:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:11.295 02:28:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:11.295 02:28:51 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:11.295 02:28:51 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:11.295 02:28:51 -- target/abort.sh@14 -- # nvmftestinit 00:12:11.295 02:28:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:11.295 02:28:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.295 02:28:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:11.295 02:28:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:11.295 02:28:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:11.295 02:28:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.295 02:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.295 02:28:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.295 02:28:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:11.295 02:28:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:11.295 02:28:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:11.295 02:28:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:11.295 02:28:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:11.295 02:28:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:11.295 02:28:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.295 02:28:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.295 02:28:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:11.295 02:28:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:11.295 02:28:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:11.295 02:28:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:11.295 02:28:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:11.295 02:28:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.295 02:28:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:11.295 02:28:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:11.295 02:28:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:11.295 02:28:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:11.295 02:28:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:11.295 02:28:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:11.295 Cannot find device "nvmf_tgt_br" 00:12:11.295 02:28:51 -- nvmf/common.sh@154 -- # true 00:12:11.295 02:28:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:11.295 Cannot find device "nvmf_tgt_br2" 00:12:11.295 02:28:51 -- nvmf/common.sh@155 -- # true 00:12:11.295 02:28:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:11.295 02:28:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:11.295 Cannot find device "nvmf_tgt_br" 00:12:11.295 02:28:51 -- nvmf/common.sh@157 -- # true 00:12:11.295 02:28:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:11.295 Cannot find device "nvmf_tgt_br2" 00:12:11.295 02:28:51 -- nvmf/common.sh@158 -- # true 00:12:11.295 02:28:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:11.295 02:28:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:11.295 02:28:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:11.295 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:11.295 02:28:51 -- nvmf/common.sh@161 -- # true 00:12:11.295 02:28:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:11.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:11.295 02:28:51 -- nvmf/common.sh@162 -- # true 00:12:11.295 02:28:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:11.295 02:28:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:11.295 02:28:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:11.295 02:28:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:11.295 02:28:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:11.295 02:28:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:11.295 02:28:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:11.295 02:28:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:11.295 02:28:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:11.295 02:28:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:11.553 02:28:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:11.553 02:28:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:11.553 02:28:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:11.553 02:28:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:11.553 02:28:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:11.553 02:28:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:11.553 02:28:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:11.553 02:28:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:11.553 02:28:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:11.553 02:28:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:11.553 02:28:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:11.553 02:28:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:11.553 02:28:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:11.553 02:28:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:11.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:12:11.553 00:12:11.553 --- 10.0.0.2 ping statistics --- 00:12:11.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.553 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:11.553 02:28:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:11.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:11.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:12:11.553 00:12:11.553 --- 10.0.0.3 ping statistics --- 00:12:11.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.553 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:11.553 02:28:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:11.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:11.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:12:11.553 00:12:11.553 --- 10.0.0.1 ping statistics --- 00:12:11.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.553 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:11.553 02:28:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.553 02:28:52 -- nvmf/common.sh@421 -- # return 0 00:12:11.553 02:28:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:11.553 02:28:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.553 02:28:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:11.553 02:28:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:11.553 02:28:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.553 02:28:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:11.553 02:28:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:11.553 02:28:52 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:11.553 02:28:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:11.553 02:28:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:11.553 02:28:52 -- common/autotest_common.sh@10 -- # set +x 00:12:11.553 02:28:52 -- nvmf/common.sh@469 -- # nvmfpid=67126 00:12:11.554 02:28:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:11.554 02:28:52 -- nvmf/common.sh@470 -- # waitforlisten 67126 00:12:11.554 02:28:52 -- common/autotest_common.sh@829 -- # '[' -z 67126 ']' 00:12:11.554 02:28:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.554 02:28:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.554 02:28:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.554 02:28:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.554 02:28:52 -- common/autotest_common.sh@10 -- # set +x 00:12:11.554 [2024-11-21 02:28:52.109938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:11.554 [2024-11-21 02:28:52.110035] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.811 [2024-11-21 02:28:52.241817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:11.811 [2024-11-21 02:28:52.382638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:11.811 [2024-11-21 02:28:52.382837] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.811 [2024-11-21 02:28:52.382856] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.811 [2024-11-21 02:28:52.382869] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
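The ping results above close out nvmf_veth_init: with NET_TYPE=virt the test rig builds a veth/bridge topology and runs the target inside the nvmf_tgt_ns_spdk namespace. A condensed, hand-runnable sketch of that topology follows, using the same interface names and 10.0.0.x addresses that appear in this log; the second target interface and the iptables rules are omitted for brevity.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# sanity check, mirroring the log: the namespaced target side can reach the initiator
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1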
00:12:11.811 [2024-11-21 02:28:52.383046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.811 [2024-11-21 02:28:52.383519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.811 [2024-11-21 02:28:52.383529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.745 02:28:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.745 02:28:53 -- common/autotest_common.sh@862 -- # return 0 00:12:12.745 02:28:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:12.745 02:28:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:12.745 02:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 02:28:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.745 02:28:53 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:12.745 02:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.745 02:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 [2024-11-21 02:28:53.262646] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.745 02:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.745 02:28:53 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:12.745 02:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.745 02:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 Malloc0 00:12:12.745 02:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.745 02:28:53 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:12.745 02:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.745 02:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 Delay0 00:12:12.745 02:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.745 02:28:53 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:12.745 02:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.745 02:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 02:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.745 02:28:53 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:12.745 02:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.745 02:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 02:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.745 02:28:53 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:12.745 02:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.745 02:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 [2024-11-21 02:28:53.335815] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.745 02:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.745 02:28:53 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:12.745 02:28:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.745 02:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 02:28:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.745 02:28:53 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:13.043 [2024-11-21 02:28:53.510157] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:14.964 Initializing NVMe Controllers 00:12:14.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:14.964 controller IO queue size 128 less than required 00:12:14.964 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:14.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:14.964 Initialization complete. Launching workers. 00:12:14.964 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30510 00:12:14.964 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30571, failed to submit 62 00:12:14.964 success 30510, unsuccess 61, failed 0 00:12:14.964 02:28:55 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:14.964 02:28:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.964 02:28:55 -- common/autotest_common.sh@10 -- # set +x 00:12:14.964 02:28:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.964 02:28:55 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:14.964 02:28:55 -- target/abort.sh@38 -- # nvmftestfini 00:12:14.964 02:28:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:14.964 02:28:55 -- nvmf/common.sh@116 -- # sync 00:12:16.341 02:28:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:16.341 02:28:56 -- nvmf/common.sh@119 -- # set +e 00:12:16.341 02:28:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:16.341 02:28:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:16.341 rmmod nvme_tcp 00:12:16.341 rmmod nvme_fabrics 00:12:16.341 rmmod nvme_keyring 00:12:16.341 02:28:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:16.341 02:28:56 -- nvmf/common.sh@123 -- # set -e 00:12:16.341 02:28:56 -- nvmf/common.sh@124 -- # return 0 00:12:16.341 02:28:56 -- nvmf/common.sh@477 -- # '[' -n 67126 ']' 00:12:16.341 02:28:56 -- nvmf/common.sh@478 -- # killprocess 67126 00:12:16.341 02:28:56 -- common/autotest_common.sh@936 -- # '[' -z 67126 ']' 00:12:16.341 02:28:56 -- common/autotest_common.sh@940 -- # kill -0 67126 00:12:16.341 02:28:56 -- common/autotest_common.sh@941 -- # uname 00:12:16.341 02:28:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.341 02:28:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67126 00:12:16.341 02:28:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:16.341 02:28:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:16.341 killing process with pid 67126 00:12:16.341 02:28:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67126' 00:12:16.341 02:28:56 -- common/autotest_common.sh@955 -- # kill 67126 00:12:16.341 02:28:56 -- common/autotest_common.sh@960 -- # wait 67126 00:12:16.600 02:28:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:16.600 02:28:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:16.600 02:28:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:16.600 02:28:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.600 02:28:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:16.600 02:28:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.600 
02:28:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.600 02:28:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.600 02:28:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:16.600 00:12:16.600 real 0m5.668s 00:12:16.600 user 0m16.406s 00:12:16.600 sys 0m0.977s 00:12:16.600 02:28:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:16.600 ************************************ 00:12:16.600 END TEST nvmf_abort 00:12:16.600 02:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:16.600 ************************************ 00:12:16.859 02:28:57 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:16.859 02:28:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:16.860 02:28:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.860 02:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:16.860 ************************************ 00:12:16.860 START TEST nvmf_ns_hotplug_stress 00:12:16.860 ************************************ 00:12:16.860 02:28:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:16.860 * Looking for test storage... 00:12:16.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:16.860 02:28:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:16.860 02:28:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:16.860 02:28:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:16.860 02:28:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:16.860 02:28:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:16.860 02:28:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:16.860 02:28:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:16.860 02:28:57 -- scripts/common.sh@335 -- # IFS=.-: 00:12:16.860 02:28:57 -- scripts/common.sh@335 -- # read -ra ver1 00:12:16.860 02:28:57 -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.860 02:28:57 -- scripts/common.sh@336 -- # read -ra ver2 00:12:16.860 02:28:57 -- scripts/common.sh@337 -- # local 'op=<' 00:12:16.860 02:28:57 -- scripts/common.sh@339 -- # ver1_l=2 00:12:16.860 02:28:57 -- scripts/common.sh@340 -- # ver2_l=1 00:12:16.860 02:28:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:16.860 02:28:57 -- scripts/common.sh@343 -- # case "$op" in 00:12:16.860 02:28:57 -- scripts/common.sh@344 -- # : 1 00:12:16.860 02:28:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:16.860 02:28:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.860 02:28:57 -- scripts/common.sh@364 -- # decimal 1 00:12:16.860 02:28:57 -- scripts/common.sh@352 -- # local d=1 00:12:16.860 02:28:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.860 02:28:57 -- scripts/common.sh@354 -- # echo 1 00:12:16.860 02:28:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:16.860 02:28:57 -- scripts/common.sh@365 -- # decimal 2 00:12:16.860 02:28:57 -- scripts/common.sh@352 -- # local d=2 00:12:16.860 02:28:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.860 02:28:57 -- scripts/common.sh@354 -- # echo 2 00:12:16.860 02:28:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:16.860 02:28:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:16.860 02:28:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:16.860 02:28:57 -- scripts/common.sh@367 -- # return 0 00:12:16.860 02:28:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.860 02:28:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:16.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.860 --rc genhtml_branch_coverage=1 00:12:16.860 --rc genhtml_function_coverage=1 00:12:16.860 --rc genhtml_legend=1 00:12:16.860 --rc geninfo_all_blocks=1 00:12:16.860 --rc geninfo_unexecuted_blocks=1 00:12:16.860 00:12:16.860 ' 00:12:16.860 02:28:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:16.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.860 --rc genhtml_branch_coverage=1 00:12:16.860 --rc genhtml_function_coverage=1 00:12:16.860 --rc genhtml_legend=1 00:12:16.860 --rc geninfo_all_blocks=1 00:12:16.860 --rc geninfo_unexecuted_blocks=1 00:12:16.860 00:12:16.860 ' 00:12:16.860 02:28:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:16.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.860 --rc genhtml_branch_coverage=1 00:12:16.860 --rc genhtml_function_coverage=1 00:12:16.860 --rc genhtml_legend=1 00:12:16.860 --rc geninfo_all_blocks=1 00:12:16.860 --rc geninfo_unexecuted_blocks=1 00:12:16.860 00:12:16.860 ' 00:12:16.860 02:28:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:16.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.860 --rc genhtml_branch_coverage=1 00:12:16.860 --rc genhtml_function_coverage=1 00:12:16.860 --rc genhtml_legend=1 00:12:16.860 --rc geninfo_all_blocks=1 00:12:16.860 --rc geninfo_unexecuted_blocks=1 00:12:16.860 00:12:16.860 ' 00:12:16.860 02:28:57 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.860 02:28:57 -- nvmf/common.sh@7 -- # uname -s 00:12:16.860 02:28:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.860 02:28:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.860 02:28:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.860 02:28:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.860 02:28:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.860 02:28:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.860 02:28:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.860 02:28:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.860 02:28:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.860 02:28:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.860 02:28:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
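The host NQN generated here, together with the host ID set just below, is what the NVME_HOST array hands to initiator-side tools. A hedged sketch of the equivalent nvme-cli connect, using the subsystem NQN and listener address this test configures later; --hostid would be passed the same way:

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$(nvme gen-hostnqn)"
# ...run I/O...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1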
00:12:16.860 02:28:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:12:16.860 02:28:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.860 02:28:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.860 02:28:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.860 02:28:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.860 02:28:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.860 02:28:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.860 02:28:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.860 02:28:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.860 02:28:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.860 02:28:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.860 02:28:57 -- paths/export.sh@5 -- # export PATH 00:12:16.860 02:28:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.860 02:28:57 -- nvmf/common.sh@46 -- # : 0 00:12:16.860 02:28:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:16.860 02:28:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:16.860 02:28:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:16.860 02:28:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.860 02:28:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.860 02:28:57 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:16.860 02:28:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:16.860 02:28:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:16.860 02:28:57 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.860 02:28:57 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:16.860 02:28:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:16.860 02:28:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.860 02:28:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:16.860 02:28:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:16.860 02:28:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:16.860 02:28:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.860 02:28:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.860 02:28:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.860 02:28:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:16.860 02:28:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:16.860 02:28:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:16.860 02:28:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:16.860 02:28:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:16.860 02:28:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:16.860 02:28:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.860 02:28:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.860 02:28:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:16.860 02:28:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:16.860 02:28:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:16.860 02:28:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:16.860 02:28:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:16.860 02:28:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.860 02:28:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:16.860 02:28:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:16.860 02:28:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:16.860 02:28:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:16.860 02:28:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:17.119 02:28:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:17.119 Cannot find device "nvmf_tgt_br" 00:12:17.119 02:28:57 -- nvmf/common.sh@154 -- # true 00:12:17.119 02:28:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:17.119 Cannot find device "nvmf_tgt_br2" 00:12:17.119 02:28:57 -- nvmf/common.sh@155 -- # true 00:12:17.119 02:28:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:17.119 02:28:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:17.119 Cannot find device "nvmf_tgt_br" 00:12:17.119 02:28:57 -- nvmf/common.sh@157 -- # true 00:12:17.119 02:28:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:17.119 Cannot find device "nvmf_tgt_br2" 00:12:17.119 02:28:57 -- nvmf/common.sh@158 -- # true 00:12:17.119 02:28:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:17.119 02:28:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:17.119 02:28:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:17.119 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:17.119 02:28:57 -- nvmf/common.sh@161 -- # true 00:12:17.119 02:28:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:17.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:17.119 02:28:57 -- nvmf/common.sh@162 -- # true 00:12:17.119 02:28:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:17.119 02:28:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:17.119 02:28:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:17.119 02:28:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:17.119 02:28:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:17.119 02:28:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:17.119 02:28:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:17.119 02:28:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:17.119 02:28:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:17.119 02:28:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:17.119 02:28:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:17.119 02:28:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:17.119 02:28:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:17.119 02:28:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:17.119 02:28:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:17.119 02:28:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:17.119 02:28:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:17.119 02:28:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:17.119 02:28:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:17.119 02:28:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:17.376 02:28:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:17.376 02:28:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:17.376 02:28:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:17.376 02:28:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:17.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:12:17.376 00:12:17.376 --- 10.0.0.2 ping statistics --- 00:12:17.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.376 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:17.376 02:28:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:17.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:17.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:12:17.376 00:12:17.376 --- 10.0.0.3 ping statistics --- 00:12:17.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.376 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:17.376 02:28:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:17.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:12:17.376 00:12:17.376 --- 10.0.0.1 ping statistics --- 00:12:17.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.376 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:12:17.376 02:28:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.376 02:28:57 -- nvmf/common.sh@421 -- # return 0 00:12:17.376 02:28:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:17.376 02:28:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.376 02:28:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:17.376 02:28:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:17.376 02:28:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.376 02:28:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:17.377 02:28:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:17.377 02:28:57 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:17.377 02:28:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:17.377 02:28:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:17.377 02:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:17.377 02:28:57 -- nvmf/common.sh@469 -- # nvmfpid=67413 00:12:17.377 02:28:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:17.377 02:28:57 -- nvmf/common.sh@470 -- # waitforlisten 67413 00:12:17.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.377 02:28:57 -- common/autotest_common.sh@829 -- # '[' -z 67413 ']' 00:12:17.377 02:28:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.377 02:28:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:17.377 02:28:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.377 02:28:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.377 02:28:57 -- common/autotest_common.sh@10 -- # set +x 00:12:17.377 [2024-11-21 02:28:57.904598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:17.377 [2024-11-21 02:28:57.904681] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.633 [2024-11-21 02:28:58.041521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:17.633 [2024-11-21 02:28:58.154818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:17.634 [2024-11-21 02:28:58.155246] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.634 [2024-11-21 02:28:58.155374] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.634 [2024-11-21 02:28:58.155583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
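The nvmf_veth_init block earlier in this test builds the topology the pings just verified: a target namespace holding 10.0.0.2 (plus 10.0.0.3 on a second interface), an initiator-side endpoint at 10.0.0.1, and a bridge joining the host ends. A condensed sketch of the same setup with the interface names from this log, omitting the second target interface:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # sanity check across the bridge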
00:12:17.634 [2024-11-21 02:28:58.155797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.634 [2024-11-21 02:28:58.155954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.634 [2024-11-21 02:28:58.155940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.567 02:28:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:18.567 02:28:58 -- common/autotest_common.sh@862 -- # return 0 00:12:18.567 02:28:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:18.567 02:28:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:18.567 02:28:58 -- common/autotest_common.sh@10 -- # set +x 00:12:18.567 02:28:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.567 02:28:58 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:18.567 02:28:58 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:18.567 [2024-11-21 02:28:59.138273] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.567 02:28:59 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:18.825 02:28:59 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.084 [2024-11-21 02:28:59.607517] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.084 02:28:59 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:19.343 02:28:59 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:19.603 Malloc0 00:12:19.603 02:29:00 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:19.862 Delay0 00:12:19.862 02:29:00 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.120 02:29:00 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:20.379 NULL1 00:12:20.379 02:29:00 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:20.638 02:29:01 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:20.638 02:29:01 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=67544 00:12:20.638 02:29:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:20.638 02:29:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.016 Read completed with error (sct=0, sc=11) 00:12:22.016 02:29:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.016 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:22.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:22.016 02:29:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:22.016 02:29:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:22.275 true 00:12:22.275 02:29:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:22.275 02:29:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.210 02:29:03 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.469 02:29:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:23.469 02:29:03 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:23.728 true 00:12:23.728 02:29:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:23.728 02:29:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.987 02:29:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.245 02:29:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:24.245 02:29:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:24.502 true 00:12:24.502 02:29:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:24.502 02:29:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.759 02:29:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.017 02:29:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:25.017 02:29:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:25.276 true 00:12:25.276 02:29:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:25.276 02:29:05 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.212 02:29:06 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.471 02:29:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:26.471 02:29:06 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:26.730 true 00:12:26.730 02:29:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:26.730 02:29:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.992 02:29:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.992 02:29:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
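The resize/remove iterations being traced here all run against the spdk_nvme_perf workload launched just above (PERF_PID=67544); that pid is what every kill -0 check probes. A reduced sketch of starting that background load, with the command line taken from this trace:

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$perf" -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
kill -0 "$PERF_PID"   # the liveness check each hotplug pass repeats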
00:12:26.992 02:29:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:27.251 true 00:12:27.251 02:29:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:27.251 02:29:07 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.243 02:29:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.502 02:29:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:28.502 02:29:08 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:28.502 true 00:12:28.502 02:29:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:28.502 02:29:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.761 02:29:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.020 02:29:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:29.020 02:29:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:29.279 true 00:12:29.279 02:29:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:29.279 02:29:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.216 02:29:10 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.475 02:29:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:30.475 02:29:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:30.735 true 00:12:30.735 02:29:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:30.735 02:29:11 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.994 02:29:11 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.253 02:29:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:31.253 02:29:11 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:31.512 true 00:12:31.512 02:29:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:31.512 02:29:12 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.770 02:29:12 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.336 02:29:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:32.336 02:29:12 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:32.594 true 00:12:32.594 02:29:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:32.594 02:29:13 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.851 02:29:13 -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.109 02:29:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:33.109 02:29:13 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:33.366 true 00:12:33.624 02:29:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:33.624 02:29:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:33.882 02:29:14 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.140 02:29:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:34.140 02:29:14 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:34.398 true 00:12:34.398 02:29:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:34.398 02:29:14 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.657 02:29:15 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.916 02:29:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:34.916 02:29:15 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:35.175 true 00:12:35.175 02:29:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:35.175 02:29:15 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.111 02:29:16 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.369 02:29:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:36.369 02:29:16 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:36.629 true 00:12:36.629 02:29:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:36.629 02:29:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.565 02:29:17 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.565 02:29:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:37.565 02:29:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:37.824 true 00:12:37.824 02:29:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:37.824 02:29:18 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
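Each pass of this loop re-attaches Delay0 as namespace 1, bumps null_size, resizes NULL1 by one block, confirms the perf run is still alive, and detaches the namespace again. A condensed sketch of that pattern (not the exact ns_hotplug_stress.sh helper), using the rpc.py path and NQN from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
size=1000
while :; do
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0
    size=$((size + 1))
    "$rpc" bdev_null_resize NULL1 "$size"
    kill -0 "$PERF_PID" 2>/dev/null || break   # stop once the perf run finishes
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
done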
00:12:38.083 02:29:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.342 02:29:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:38.342 02:29:18 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:38.601 true 00:12:38.601 02:29:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:38.601 02:29:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.537 02:29:19 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.796 02:29:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:39.796 02:29:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:39.796 true 00:12:39.796 02:29:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:39.796 02:29:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.056 02:29:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.315 02:29:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:40.315 02:29:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:40.574 true 00:12:40.574 02:29:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:40.574 02:29:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.511 02:29:21 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.770 02:29:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:41.770 02:29:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:42.030 true 00:12:42.030 02:29:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:42.030 02:29:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.290 02:29:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.549 02:29:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:42.549 02:29:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:42.807 true 00:12:42.807 02:29:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:42.807 02:29:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.407 02:29:23 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.975 02:29:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:43.975 02:29:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:43.975 true 00:12:43.975 02:29:24 -- target/ns_hotplug_stress.sh@44 
-- # kill -0 67544 00:12:43.975 02:29:24 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.543 02:29:24 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.543 02:29:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:44.543 02:29:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:44.801 true 00:12:44.801 02:29:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:44.801 02:29:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.060 02:29:25 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.318 02:29:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:45.318 02:29:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:45.577 true 00:12:45.577 02:29:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:45.577 02:29:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.514 02:29:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:46.773 02:29:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:46.773 02:29:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:47.031 true 00:12:47.031 02:29:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:47.031 02:29:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.290 02:29:27 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.548 02:29:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:47.548 02:29:27 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:47.548 true 00:12:47.548 02:29:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:47.548 02:29:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.485 02:29:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.742 02:29:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:48.742 02:29:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:49.000 true 00:12:49.000 02:29:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:49.000 02:29:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.259 02:29:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.522 02:29:29 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:49.522 02:29:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:49.522 true 00:12:49.522 02:29:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:49.522 02:29:30 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.459 02:29:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.719 Initializing NVMe Controllers 00:12:50.719 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:50.719 Controller IO queue size 128, less than required. 00:12:50.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:50.719 Controller IO queue size 128, less than required. 00:12:50.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:50.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:50.719 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:50.719 Initialization complete. Launching workers. 00:12:50.719 ======================================================== 00:12:50.719 Latency(us) 00:12:50.719 Device Information : IOPS MiB/s Average min max 00:12:50.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 500.87 0.24 122237.79 3131.49 1045237.61 00:12:50.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10359.02 5.06 12356.21 3273.80 617461.13 00:12:50.719 ======================================================== 00:12:50.719 Total : 10859.88 5.30 17424.03 3131.49 1045237.61 00:12:50.719 00:12:50.719 02:29:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:12:50.719 02:29:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:12:50.977 true 00:12:50.977 02:29:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 67544 00:12:50.977 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (67544) - No such process 00:12:50.977 02:29:31 -- target/ns_hotplug_stress.sh@53 -- # wait 67544 00:12:50.977 02:29:31 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.236 02:29:31 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:51.495 02:29:31 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:51.495 02:29:31 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:51.495 02:29:31 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:51.495 02:29:31 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:51.495 02:29:31 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:51.754 null0 00:12:51.754 02:29:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:51.754 02:29:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:51.754 02:29:32 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:52.012 null1 00:12:52.012 02:29:32 -- target/ns_hotplug_stress.sh@59 -- # 
(( ++i )) 00:12:52.012 02:29:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:52.012 02:29:32 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:52.012 null2 00:12:52.012 02:29:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:52.012 02:29:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:52.012 02:29:32 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:52.271 null3 00:12:52.271 02:29:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:52.271 02:29:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:52.271 02:29:32 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:52.529 null4 00:12:52.530 02:29:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:52.530 02:29:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:52.530 02:29:33 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:52.788 null5 00:12:52.788 02:29:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:52.788 02:29:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:52.788 02:29:33 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:53.047 null6 00:12:53.047 02:29:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:53.047 02:29:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:53.047 02:29:33 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:53.306 null7 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
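From this point the test fans out to eight concurrent add_remove workers, one per null bdev, collects their pids, and waits on all of them (the wait 68572 ... 68586 seen below). A simplified stand-in for that fan-out; add_remove here is a hypothetical condensation of the helper being traced, not its exact body:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
add_remove() {   # hypothetical condensation of the traced helper: churn one nsid/bdev pair
    local nsid=$1 bdev=$2
    for _ in $(seq 1 10); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}
pids=()
for i in $(seq 0 7); do
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done
wait "${pids[@]}"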
00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@66 -- # wait 68572 68573 68576 68578 68579 68581 68584 68586 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.306 02:29:33 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:53.566 02:29:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:53.566 02:29:33 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.566 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.825 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.084 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:54.343 02:29:34 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:54.602 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.862 02:29:35 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:54.862 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.121 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:55.380 02:29:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.639 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:55.898 
02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:55.898 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.157 02:29:36 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:56.157 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.416 02:29:36 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:56.416 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.416 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.416 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:56.675 02:29:37 -- target/ns_hotplug_stress.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:56.942 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:57.203 
02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:57.203 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.461 02:29:37 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:57.461 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.462 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.462 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:57.462 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.462 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.462 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:57.720 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:57.979 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.238 02:29:38 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:58.497 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:58.497 02:29:38 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:58.497 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.497 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.497 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.497 02:29:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.497 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.816 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.816 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.816 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:58.816 02:29:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:58.816 02:29:39 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:58.816 02:29:39 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:58.816 02:29:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:58.816 02:29:39 -- nvmf/common.sh@116 -- # sync 00:12:58.816 02:29:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:58.816 02:29:39 -- nvmf/common.sh@119 -- # set +e 00:12:58.816 02:29:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:58.816 02:29:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:58.816 rmmod nvme_tcp 00:12:58.816 rmmod nvme_fabrics 00:12:58.816 rmmod nvme_keyring 00:12:58.816 02:29:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:58.816 02:29:39 -- nvmf/common.sh@123 -- # set -e 00:12:58.816 02:29:39 -- nvmf/common.sh@124 -- # return 0 00:12:58.816 02:29:39 -- nvmf/common.sh@477 -- # '[' -n 67413 ']' 00:12:58.816 02:29:39 -- nvmf/common.sh@478 -- # killprocess 67413 00:12:58.816 02:29:39 -- common/autotest_common.sh@936 -- # '[' -z 67413 ']' 00:12:58.816 02:29:39 -- common/autotest_common.sh@940 -- # kill -0 67413 00:12:58.816 02:29:39 -- common/autotest_common.sh@941 -- # uname 00:12:58.816 02:29:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.816 02:29:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67413 
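This is the standard teardown: nvmftestfini syncs and unloads the NVMe/TCP kernel modules (the rmmod lines are the verbose output of that removal), then killprocess stops the nvmf_tgt reactor with pid 67413, which is signalled and reaped in the entries just below. A condensed sketch of that sequence using only commands visible in the trace; the helper name and argument handling are assumptions, not the nvmf/common.sh implementation:

#!/usr/bin/env bash
# Condensed teardown sketch based on the trace -- not the authoritative
# nvmftestfini/killprocess helpers.
stop_nvmf_target() {
    local pid=$1
    sync
    modprobe -v -r nvme-tcp       # verbose removal; also drops nvme_fabrics/nvme_keyring per the rmmod lines
    modprobe -v -r nvme-fabrics
    if kill -0 "$pid" 2>/dev/null; then
        echo "killing process with pid $pid"
        kill "$pid"
        # wait only reaps children of this shell; in the harness the target is one.
        wait "$pid" 2>/dev/null || true
    fi
}

stop_nvmf_target "$nvmfpid"       # in this trace, $nvmfpid was 67413
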
00:12:58.816 killing process with pid 67413 00:12:58.816 02:29:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:58.816 02:29:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:58.816 02:29:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67413' 00:12:58.816 02:29:39 -- common/autotest_common.sh@955 -- # kill 67413 00:12:58.816 02:29:39 -- common/autotest_common.sh@960 -- # wait 67413 00:12:59.383 02:29:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:59.383 02:29:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:59.383 02:29:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:59.383 02:29:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.383 02:29:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:59.383 02:29:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.383 02:29:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.383 02:29:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.383 02:29:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:59.383 00:12:59.383 real 0m42.552s 00:12:59.383 user 3m24.346s 00:12:59.383 sys 0m12.511s 00:12:59.383 02:29:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.383 02:29:39 -- common/autotest_common.sh@10 -- # set +x 00:12:59.383 ************************************ 00:12:59.383 END TEST nvmf_ns_hotplug_stress 00:12:59.383 ************************************ 00:12:59.383 02:29:39 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:59.383 02:29:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:59.383 02:29:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.383 02:29:39 -- common/autotest_common.sh@10 -- # set +x 00:12:59.383 ************************************ 00:12:59.384 START TEST nvmf_connect_stress 00:12:59.384 ************************************ 00:12:59.384 02:29:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:59.384 * Looking for test storage... 
00:12:59.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:59.384 02:29:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:59.384 02:29:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:59.384 02:29:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:59.384 02:29:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:59.384 02:29:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:59.384 02:29:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:59.384 02:29:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:59.643 02:29:40 -- scripts/common.sh@335 -- # IFS=.-: 00:12:59.643 02:29:40 -- scripts/common.sh@335 -- # read -ra ver1 00:12:59.643 02:29:40 -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.643 02:29:40 -- scripts/common.sh@336 -- # read -ra ver2 00:12:59.643 02:29:40 -- scripts/common.sh@337 -- # local 'op=<' 00:12:59.643 02:29:40 -- scripts/common.sh@339 -- # ver1_l=2 00:12:59.643 02:29:40 -- scripts/common.sh@340 -- # ver2_l=1 00:12:59.643 02:29:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:59.643 02:29:40 -- scripts/common.sh@343 -- # case "$op" in 00:12:59.643 02:29:40 -- scripts/common.sh@344 -- # : 1 00:12:59.643 02:29:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:59.643 02:29:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.643 02:29:40 -- scripts/common.sh@364 -- # decimal 1 00:12:59.643 02:29:40 -- scripts/common.sh@352 -- # local d=1 00:12:59.643 02:29:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.643 02:29:40 -- scripts/common.sh@354 -- # echo 1 00:12:59.643 02:29:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:59.643 02:29:40 -- scripts/common.sh@365 -- # decimal 2 00:12:59.643 02:29:40 -- scripts/common.sh@352 -- # local d=2 00:12:59.643 02:29:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.643 02:29:40 -- scripts/common.sh@354 -- # echo 2 00:12:59.643 02:29:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:59.643 02:29:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:59.643 02:29:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:59.643 02:29:40 -- scripts/common.sh@367 -- # return 0 00:12:59.643 02:29:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.643 02:29:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:59.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.643 --rc genhtml_branch_coverage=1 00:12:59.643 --rc genhtml_function_coverage=1 00:12:59.643 --rc genhtml_legend=1 00:12:59.643 --rc geninfo_all_blocks=1 00:12:59.643 --rc geninfo_unexecuted_blocks=1 00:12:59.643 00:12:59.643 ' 00:12:59.643 02:29:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:59.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.643 --rc genhtml_branch_coverage=1 00:12:59.643 --rc genhtml_function_coverage=1 00:12:59.643 --rc genhtml_legend=1 00:12:59.643 --rc geninfo_all_blocks=1 00:12:59.643 --rc geninfo_unexecuted_blocks=1 00:12:59.643 00:12:59.643 ' 00:12:59.643 02:29:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:59.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.643 --rc genhtml_branch_coverage=1 00:12:59.643 --rc genhtml_function_coverage=1 00:12:59.643 --rc genhtml_legend=1 00:12:59.643 --rc geninfo_all_blocks=1 00:12:59.643 --rc geninfo_unexecuted_blocks=1 00:12:59.643 00:12:59.643 ' 00:12:59.643 
02:29:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:59.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.643 --rc genhtml_branch_coverage=1 00:12:59.643 --rc genhtml_function_coverage=1 00:12:59.643 --rc genhtml_legend=1 00:12:59.643 --rc geninfo_all_blocks=1 00:12:59.643 --rc geninfo_unexecuted_blocks=1 00:12:59.643 00:12:59.643 ' 00:12:59.643 02:29:40 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:59.643 02:29:40 -- nvmf/common.sh@7 -- # uname -s 00:12:59.643 02:29:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.643 02:29:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.643 02:29:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.643 02:29:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.643 02:29:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.643 02:29:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.643 02:29:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.643 02:29:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.643 02:29:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.643 02:29:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.643 02:29:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:12:59.643 02:29:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:12:59.643 02:29:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.643 02:29:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.643 02:29:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:59.643 02:29:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:59.643 02:29:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.643 02:29:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.643 02:29:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.643 02:29:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.643 02:29:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.644 02:29:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.644 02:29:40 -- paths/export.sh@5 -- # export PATH 00:12:59.644 02:29:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.644 02:29:40 -- nvmf/common.sh@46 -- # : 0 00:12:59.644 02:29:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.644 02:29:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.644 02:29:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.644 02:29:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.644 02:29:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.644 02:29:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:59.644 02:29:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.644 02:29:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.644 02:29:40 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:59.644 02:29:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.644 02:29:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.644 02:29:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.644 02:29:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.644 02:29:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.644 02:29:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.644 02:29:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.644 02:29:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.644 02:29:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:59.644 02:29:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:59.644 02:29:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:59.644 02:29:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:59.644 02:29:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:59.644 02:29:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:59.644 02:29:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.644 02:29:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.644 02:29:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:59.644 02:29:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:59.644 02:29:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:59.644 02:29:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:59.644 02:29:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:59.644 02:29:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:59.644 02:29:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:59.644 02:29:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:59.644 02:29:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:59.644 02:29:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:59.644 02:29:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:59.644 02:29:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:59.644 Cannot find device "nvmf_tgt_br" 00:12:59.644 02:29:40 -- nvmf/common.sh@154 -- # true 00:12:59.644 02:29:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:59.644 Cannot find device "nvmf_tgt_br2" 00:12:59.644 02:29:40 -- nvmf/common.sh@155 -- # true 00:12:59.644 02:29:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:59.644 02:29:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:59.644 Cannot find device "nvmf_tgt_br" 00:12:59.644 02:29:40 -- nvmf/common.sh@157 -- # true 00:12:59.644 02:29:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:59.644 Cannot find device "nvmf_tgt_br2" 00:12:59.644 02:29:40 -- nvmf/common.sh@158 -- # true 00:12:59.644 02:29:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:59.644 02:29:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:59.644 02:29:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:59.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.644 02:29:40 -- nvmf/common.sh@161 -- # true 00:12:59.644 02:29:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:59.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:59.644 02:29:40 -- nvmf/common.sh@162 -- # true 00:12:59.644 02:29:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:59.644 02:29:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:59.644 02:29:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.644 02:29:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.644 02:29:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:59.644 02:29:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.644 02:29:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.644 02:29:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:59.644 02:29:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:59.644 02:29:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:59.644 02:29:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:59.644 02:29:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:59.644 02:29:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:59.644 02:29:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.903 02:29:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.903 02:29:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.903 02:29:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:59.903 02:29:40 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:59.903 02:29:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.903 02:29:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.903 02:29:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.903 02:29:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.903 02:29:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.903 02:29:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:59.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:59.903 00:12:59.903 --- 10.0.0.2 ping statistics --- 00:12:59.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.903 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:59.903 02:29:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:59.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:12:59.903 00:12:59.903 --- 10.0.0.3 ping statistics --- 00:12:59.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.903 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:59.903 02:29:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:59.903 00:12:59.903 --- 10.0.0.1 ping statistics --- 00:12:59.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.904 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:59.904 02:29:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.904 02:29:40 -- nvmf/common.sh@421 -- # return 0 00:12:59.904 02:29:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:59.904 02:29:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.904 02:29:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:59.904 02:29:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:59.904 02:29:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.904 02:29:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:59.904 02:29:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:59.904 02:29:40 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:59.904 02:29:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:59.904 02:29:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.904 02:29:40 -- common/autotest_common.sh@10 -- # set +x 00:12:59.904 02:29:40 -- nvmf/common.sh@469 -- # nvmfpid=69913 00:12:59.904 02:29:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:59.904 02:29:40 -- nvmf/common.sh@470 -- # waitforlisten 69913 00:12:59.904 02:29:40 -- common/autotest_common.sh@829 -- # '[' -z 69913 ']' 00:12:59.904 02:29:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.904 02:29:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.904 02:29:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
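The nvmf_veth_init sequence above builds the virtual test network: a veth pair for the initiator stays in the root namespace, the target-side veth ends are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are joined by the nvmf_br bridge so 10.0.0.1 (initiator) can reach the 10.0.0.2/10.0.0.3 listeners, which the ping checks confirm. A condensed sketch of the same setup reduced to one target interface, using the interface names, addresses, and iptables rules from the trace (error handling omitted):

#!/usr/bin/env bash
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                          # bridge the two host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                       # initiator -> target reachability check
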
00:12:59.904 02:29:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.904 02:29:40 -- common/autotest_common.sh@10 -- # set +x 00:12:59.904 [2024-11-21 02:29:40.484058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:59.904 [2024-11-21 02:29:40.484141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.163 [2024-11-21 02:29:40.623486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.163 [2024-11-21 02:29:40.740250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:00.163 [2024-11-21 02:29:40.740395] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.163 [2024-11-21 02:29:40.740409] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.163 [2024-11-21 02:29:40.740418] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.163 [2024-11-21 02:29:40.740775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.163 [2024-11-21 02:29:40.741018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.163 [2024-11-21 02:29:40.741026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.099 02:29:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.099 02:29:41 -- common/autotest_common.sh@862 -- # return 0 00:13:01.099 02:29:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:01.099 02:29:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.099 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.099 02:29:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.099 02:29:41 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.099 02:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.099 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.099 [2024-11-21 02:29:41.533083] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.099 02:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.099 02:29:41 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:01.099 02:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.099 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.099 02:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.099 02:29:41 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.099 02:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.099 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.099 [2024-11-21 02:29:41.553670] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.099 02:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.099 02:29:41 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:01.099 02:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.099 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.099 NULL1 00:13:01.099 
02:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.099 02:29:41 -- target/connect_stress.sh@21 -- # PERF_PID=69966 00:13:01.099 02:29:41 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:01.099 02:29:41 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:01.099 02:29:41 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:01.099 02:29:41 -- target/connect_stress.sh@28 -- # cat 00:13:01.099 02:29:41 -- target/connect_stress.sh@34 -- # kill -0 
69966 00:13:01.099 02:29:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.099 02:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.099 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.358 02:29:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.358 02:29:41 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:01.358 02:29:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.358 02:29:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.358 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:13:01.924 02:29:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.924 02:29:42 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:01.924 02:29:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.924 02:29:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.924 02:29:42 -- common/autotest_common.sh@10 -- # set +x 00:13:02.183 02:29:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.183 02:29:42 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:02.183 02:29:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.183 02:29:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.183 02:29:42 -- common/autotest_common.sh@10 -- # set +x 00:13:02.440 02:29:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.440 02:29:42 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:02.440 02:29:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.440 02:29:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.440 02:29:42 -- common/autotest_common.sh@10 -- # set +x 00:13:02.699 02:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.699 02:29:43 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:02.699 02:29:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.699 02:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.699 02:29:43 -- common/autotest_common.sh@10 -- # set +x 00:13:02.957 02:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.957 02:29:43 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:02.957 02:29:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.957 02:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.957 02:29:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.524 02:29:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.524 02:29:43 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:03.524 02:29:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.524 02:29:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.524 02:29:43 -- common/autotest_common.sh@10 -- # set +x 00:13:03.782 02:29:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.782 02:29:44 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:03.782 02:29:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.782 02:29:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.782 02:29:44 -- common/autotest_common.sh@10 -- # set +x 00:13:04.041 02:29:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.041 02:29:44 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:04.041 02:29:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.041 02:29:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.041 02:29:44 -- common/autotest_common.sh@10 -- # set +x 00:13:04.299 02:29:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.299 02:29:44 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:04.299 02:29:44 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.299 02:29:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.299 02:29:44 -- common/autotest_common.sh@10 -- # set +x 00:13:04.866 02:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.866 02:29:45 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:04.866 02:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.866 02:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.866 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:05.125 02:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.125 02:29:45 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:05.125 02:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.125 02:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.125 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:05.383 02:29:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.383 02:29:45 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:05.383 02:29:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.383 02:29:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.383 02:29:45 -- common/autotest_common.sh@10 -- # set +x 00:13:05.642 02:29:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.642 02:29:46 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:05.642 02:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.642 02:29:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.642 02:29:46 -- common/autotest_common.sh@10 -- # set +x 00:13:05.901 02:29:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.901 02:29:46 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:05.901 02:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.901 02:29:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.901 02:29:46 -- common/autotest_common.sh@10 -- # set +x 00:13:06.468 02:29:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.468 02:29:46 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:06.468 02:29:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.468 02:29:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.468 02:29:46 -- common/autotest_common.sh@10 -- # set +x 00:13:06.727 02:29:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.727 02:29:47 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:06.727 02:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.727 02:29:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.727 02:29:47 -- common/autotest_common.sh@10 -- # set +x 00:13:06.986 02:29:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.986 02:29:47 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:06.986 02:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.986 02:29:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.986 02:29:47 -- common/autotest_common.sh@10 -- # set +x 00:13:07.245 02:29:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.245 02:29:47 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:07.245 02:29:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.245 02:29:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.245 02:29:47 -- common/autotest_common.sh@10 -- # set +x 00:13:07.504 02:29:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.504 02:29:48 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:07.504 02:29:48 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:13:07.504 02:29:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.504 02:29:48 -- common/autotest_common.sh@10 -- # set +x 00:13:08.072 02:29:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.072 02:29:48 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:08.072 02:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.072 02:29:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.072 02:29:48 -- common/autotest_common.sh@10 -- # set +x 00:13:08.331 02:29:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.331 02:29:48 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:08.331 02:29:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.331 02:29:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.332 02:29:48 -- common/autotest_common.sh@10 -- # set +x 00:13:08.590 02:29:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.590 02:29:49 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:08.590 02:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.590 02:29:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.590 02:29:49 -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 02:29:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 02:29:49 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:08.848 02:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.848 02:29:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 02:29:49 -- common/autotest_common.sh@10 -- # set +x 00:13:09.107 02:29:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.107 02:29:49 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:09.107 02:29:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.107 02:29:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.107 02:29:49 -- common/autotest_common.sh@10 -- # set +x 00:13:09.674 02:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.674 02:29:50 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:09.674 02:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.674 02:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.674 02:29:50 -- common/autotest_common.sh@10 -- # set +x 00:13:09.933 02:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.933 02:29:50 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:09.933 02:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.933 02:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.933 02:29:50 -- common/autotest_common.sh@10 -- # set +x 00:13:10.192 02:29:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.192 02:29:50 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:10.192 02:29:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.192 02:29:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.192 02:29:50 -- common/autotest_common.sh@10 -- # set +x 00:13:10.451 02:29:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.451 02:29:51 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:10.451 02:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.451 02:29:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.451 02:29:51 -- common/autotest_common.sh@10 -- # set +x 00:13:10.709 02:29:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.709 02:29:51 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:10.709 02:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.709 02:29:51 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.709 02:29:51 -- common/autotest_common.sh@10 -- # set +x 00:13:11.277 02:29:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.277 02:29:51 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:11.277 02:29:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.277 02:29:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.277 02:29:51 -- common/autotest_common.sh@10 -- # set +x 00:13:11.277 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:11.536 02:29:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.536 02:29:51 -- target/connect_stress.sh@34 -- # kill -0 69966 00:13:11.536 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69966) - No such process 00:13:11.536 02:29:51 -- target/connect_stress.sh@38 -- # wait 69966 00:13:11.536 02:29:51 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:11.536 02:29:51 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:11.536 02:29:51 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:11.536 02:29:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:11.536 02:29:51 -- nvmf/common.sh@116 -- # sync 00:13:11.536 02:29:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:11.536 02:29:52 -- nvmf/common.sh@119 -- # set +e 00:13:11.536 02:29:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:11.536 02:29:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:11.536 rmmod nvme_tcp 00:13:11.536 rmmod nvme_fabrics 00:13:11.536 rmmod nvme_keyring 00:13:11.536 02:29:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:11.536 02:29:52 -- nvmf/common.sh@123 -- # set -e 00:13:11.536 02:29:52 -- nvmf/common.sh@124 -- # return 0 00:13:11.536 02:29:52 -- nvmf/common.sh@477 -- # '[' -n 69913 ']' 00:13:11.536 02:29:52 -- nvmf/common.sh@478 -- # killprocess 69913 00:13:11.536 02:29:52 -- common/autotest_common.sh@936 -- # '[' -z 69913 ']' 00:13:11.536 02:29:52 -- common/autotest_common.sh@940 -- # kill -0 69913 00:13:11.536 02:29:52 -- common/autotest_common.sh@941 -- # uname 00:13:11.536 02:29:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:11.536 02:29:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69913 00:13:11.536 02:29:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:11.536 02:29:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:11.536 02:29:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69913' 00:13:11.536 killing process with pid 69913 00:13:11.536 02:29:52 -- common/autotest_common.sh@955 -- # kill 69913 00:13:11.536 02:29:52 -- common/autotest_common.sh@960 -- # wait 69913 00:13:11.795 02:29:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:11.795 02:29:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:11.795 02:29:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:11.795 02:29:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.795 02:29:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:11.795 02:29:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.795 02:29:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.795 02:29:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.795 02:29:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:11.795 00:13:11.795 real 0m12.538s 
00:13:11.795 user 0m41.783s 00:13:11.795 sys 0m3.055s 00:13:11.795 02:29:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:11.795 ************************************ 00:13:11.795 END TEST nvmf_connect_stress 00:13:11.795 02:29:52 -- common/autotest_common.sh@10 -- # set +x 00:13:11.795 ************************************ 00:13:12.053 02:29:52 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:12.053 02:29:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:12.053 02:29:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.053 02:29:52 -- common/autotest_common.sh@10 -- # set +x 00:13:12.053 ************************************ 00:13:12.053 START TEST nvmf_fused_ordering 00:13:12.053 ************************************ 00:13:12.053 02:29:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:12.053 * Looking for test storage... 00:13:12.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:12.053 02:29:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:12.053 02:29:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:12.053 02:29:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:12.053 02:29:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:12.053 02:29:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:12.053 02:29:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:12.053 02:29:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:12.053 02:29:52 -- scripts/common.sh@335 -- # IFS=.-: 00:13:12.053 02:29:52 -- scripts/common.sh@335 -- # read -ra ver1 00:13:12.053 02:29:52 -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.053 02:29:52 -- scripts/common.sh@336 -- # read -ra ver2 00:13:12.053 02:29:52 -- scripts/common.sh@337 -- # local 'op=<' 00:13:12.053 02:29:52 -- scripts/common.sh@339 -- # ver1_l=2 00:13:12.053 02:29:52 -- scripts/common.sh@340 -- # ver2_l=1 00:13:12.053 02:29:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:12.053 02:29:52 -- scripts/common.sh@343 -- # case "$op" in 00:13:12.053 02:29:52 -- scripts/common.sh@344 -- # : 1 00:13:12.053 02:29:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:12.053 02:29:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.053 02:29:52 -- scripts/common.sh@364 -- # decimal 1 00:13:12.053 02:29:52 -- scripts/common.sh@352 -- # local d=1 00:13:12.053 02:29:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.053 02:29:52 -- scripts/common.sh@354 -- # echo 1 00:13:12.053 02:29:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:12.053 02:29:52 -- scripts/common.sh@365 -- # decimal 2 00:13:12.053 02:29:52 -- scripts/common.sh@352 -- # local d=2 00:13:12.053 02:29:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.053 02:29:52 -- scripts/common.sh@354 -- # echo 2 00:13:12.053 02:29:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:12.053 02:29:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:12.054 02:29:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:12.054 02:29:52 -- scripts/common.sh@367 -- # return 0 00:13:12.054 02:29:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.054 02:29:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:12.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.054 --rc genhtml_branch_coverage=1 00:13:12.054 --rc genhtml_function_coverage=1 00:13:12.054 --rc genhtml_legend=1 00:13:12.054 --rc geninfo_all_blocks=1 00:13:12.054 --rc geninfo_unexecuted_blocks=1 00:13:12.054 00:13:12.054 ' 00:13:12.054 02:29:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:12.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.054 --rc genhtml_branch_coverage=1 00:13:12.054 --rc genhtml_function_coverage=1 00:13:12.054 --rc genhtml_legend=1 00:13:12.054 --rc geninfo_all_blocks=1 00:13:12.054 --rc geninfo_unexecuted_blocks=1 00:13:12.054 00:13:12.054 ' 00:13:12.054 02:29:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:12.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.054 --rc genhtml_branch_coverage=1 00:13:12.054 --rc genhtml_function_coverage=1 00:13:12.054 --rc genhtml_legend=1 00:13:12.054 --rc geninfo_all_blocks=1 00:13:12.054 --rc geninfo_unexecuted_blocks=1 00:13:12.054 00:13:12.054 ' 00:13:12.054 02:29:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:12.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.054 --rc genhtml_branch_coverage=1 00:13:12.054 --rc genhtml_function_coverage=1 00:13:12.054 --rc genhtml_legend=1 00:13:12.054 --rc geninfo_all_blocks=1 00:13:12.054 --rc geninfo_unexecuted_blocks=1 00:13:12.054 00:13:12.054 ' 00:13:12.054 02:29:52 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:12.054 02:29:52 -- nvmf/common.sh@7 -- # uname -s 00:13:12.054 02:29:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.054 02:29:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.054 02:29:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.054 02:29:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.054 02:29:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.054 02:29:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.054 02:29:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.054 02:29:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.054 02:29:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.054 02:29:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.054 02:29:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
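The scripts/common.sh trace above is the coverage tooling gate: "lt 1.15 2" asks whether the installed lcov (1.15 here) is older than 2.x by splitting both version strings on '.', '-' and ':' and comparing the fields numerically from left to right. A condensed sketch of the same comparison idea, simplified rather than the exact cmp_versions helper:

  # Succeeds when $1 sorts before $2 as a dotted version (missing fields count as 0).
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov is a 1.x release"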
00:13:12.054 02:29:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:13:12.054 02:29:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.054 02:29:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.054 02:29:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:12.054 02:29:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:12.054 02:29:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.054 02:29:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.054 02:29:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.054 02:29:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.054 02:29:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.054 02:29:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.054 02:29:52 -- paths/export.sh@5 -- # export PATH 00:13:12.054 02:29:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.054 02:29:52 -- nvmf/common.sh@46 -- # : 0 00:13:12.054 02:29:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:12.054 02:29:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:12.054 02:29:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:12.054 02:29:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.054 02:29:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.054 02:29:52 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:12.054 02:29:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:12.054 02:29:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:12.054 02:29:52 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:12.054 02:29:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:12.054 02:29:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.054 02:29:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:12.054 02:29:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:12.054 02:29:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:12.054 02:29:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.054 02:29:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.054 02:29:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.054 02:29:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:12.054 02:29:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:12.054 02:29:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:12.054 02:29:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:12.054 02:29:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:12.054 02:29:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:12.054 02:29:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.054 02:29:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.054 02:29:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:12.054 02:29:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:12.054 02:29:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:12.054 02:29:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:12.054 02:29:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:12.054 02:29:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.054 02:29:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:12.054 02:29:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:12.054 02:29:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:12.054 02:29:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:12.054 02:29:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:12.313 02:29:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:12.313 Cannot find device "nvmf_tgt_br" 00:13:12.313 02:29:52 -- nvmf/common.sh@154 -- # true 00:13:12.313 02:29:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.313 Cannot find device "nvmf_tgt_br2" 00:13:12.313 02:29:52 -- nvmf/common.sh@155 -- # true 00:13:12.313 02:29:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:12.313 02:29:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:12.313 Cannot find device "nvmf_tgt_br" 00:13:12.313 02:29:52 -- nvmf/common.sh@157 -- # true 00:13:12.313 02:29:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:12.313 Cannot find device "nvmf_tgt_br2" 00:13:12.313 02:29:52 -- nvmf/common.sh@158 -- # true 00:13:12.313 02:29:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:12.313 02:29:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:12.313 02:29:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.313 02:29:52 -- nvmf/common.sh@161 -- # true 00:13:12.313 02:29:52 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.313 02:29:52 -- nvmf/common.sh@162 -- # true 00:13:12.313 02:29:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.313 02:29:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.313 02:29:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:12.313 02:29:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:12.313 02:29:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:12.313 02:29:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:12.313 02:29:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:12.313 02:29:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:12.313 02:29:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:12.313 02:29:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:12.313 02:29:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:12.313 02:29:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:12.313 02:29:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:12.313 02:29:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:12.313 02:29:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:12.313 02:29:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:12.313 02:29:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:12.313 02:29:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:12.313 02:29:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.571 02:29:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.571 02:29:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.571 02:29:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.571 02:29:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.571 02:29:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:12.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:12.571 00:13:12.571 --- 10.0.0.2 ping statistics --- 00:13:12.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.571 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:12.571 02:29:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:12.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:13:12.571 00:13:12.571 --- 10.0.0.3 ping statistics --- 00:13:12.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.572 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:12.572 02:29:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:12.572 00:13:12.572 --- 10.0.0.1 ping statistics --- 00:13:12.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.572 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:12.572 02:29:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.572 02:29:53 -- nvmf/common.sh@421 -- # return 0 00:13:12.572 02:29:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:12.572 02:29:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.572 02:29:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:12.572 02:29:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:12.572 02:29:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.572 02:29:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:12.572 02:29:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:12.572 02:29:53 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:12.572 02:29:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:12.572 02:29:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.572 02:29:53 -- common/autotest_common.sh@10 -- # set +x 00:13:12.572 02:29:53 -- nvmf/common.sh@469 -- # nvmfpid=70305 00:13:12.572 02:29:53 -- nvmf/common.sh@470 -- # waitforlisten 70305 00:13:12.572 02:29:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:12.572 02:29:53 -- common/autotest_common.sh@829 -- # '[' -z 70305 ']' 00:13:12.572 02:29:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.572 02:29:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.572 02:29:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.572 02:29:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.572 02:29:53 -- common/autotest_common.sh@10 -- # set +x 00:13:12.572 [2024-11-21 02:29:53.090307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:12.572 [2024-11-21 02:29:53.090368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.830 [2024-11-21 02:29:53.225970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.830 [2024-11-21 02:29:53.336771] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:12.830 [2024-11-21 02:29:53.336970] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.830 [2024-11-21 02:29:53.336987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.830 [2024-11-21 02:29:53.336999] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
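nvmf_veth_init above rebuilds the same two-namespace topology used by the previous test: one veth pair for the initiator and one per target interface, the target ends moved into nvmf_tgt_ns_spdk, the host-side peers enslaved to an nvmf_br bridge, and an iptables rule admitting NVMe/TCP on port 4420. A condensed sketch of that topology, taken from the nvmf/common.sh trace (only the first target interface is shown; the second, 10.0.0.3 on nvmf_tgt_if2, is set up the same way):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: initiator side and target side, each with a bridge-facing peer.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Addressing: 10.0.0.1 for the initiator, 10.0.0.2 inside the target namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers together and open TCP port 4420 for NVMe/TCP.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity check: the initiator can reach the target address before the app starts.
  ping -c 1 10.0.0.2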
00:13:12.830 [2024-11-21 02:29:53.337041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.765 02:29:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.765 02:29:54 -- common/autotest_common.sh@862 -- # return 0 00:13:13.765 02:29:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:13.765 02:29:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.765 02:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:13.765 02:29:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.765 02:29:54 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.765 02:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.765 02:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:13.765 [2024-11-21 02:29:54.103990] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.765 02:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.765 02:29:54 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:13.765 02:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.765 02:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:13.765 02:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.765 02:29:54 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.765 02:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.765 02:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:13.765 [2024-11-21 02:29:54.124132] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.765 02:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.765 02:29:54 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:13.765 02:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.765 02:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:13.765 NULL1 00:13:13.765 02:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.765 02:29:54 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:13.765 02:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.765 02:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:13.765 02:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.765 02:29:54 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:13.765 02:29:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.765 02:29:54 -- common/autotest_common.sh@10 -- # set +x 00:13:13.765 02:29:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.765 02:29:54 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:13.765 [2024-11-21 02:29:54.176995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:13.765 [2024-11-21 02:29:54.177046] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70355 ] 00:13:14.024 Attached to nqn.2016-06.io.spdk:cnode1 00:13:14.024 Namespace ID: 1 size: 1GB 00:13:14.024 fused_ordering(0) 00:13:14.024 fused_ordering(1) 00:13:14.024 fused_ordering(2) 00:13:14.024 fused_ordering(3) 00:13:14.024 fused_ordering(4) 00:13:14.024 fused_ordering(5) 00:13:14.024 fused_ordering(6) 00:13:14.024 fused_ordering(7) 00:13:14.024 fused_ordering(8) 00:13:14.024 fused_ordering(9) 00:13:14.024 fused_ordering(10) 00:13:14.024 fused_ordering(11) 00:13:14.024 fused_ordering(12) 00:13:14.024 fused_ordering(13) 00:13:14.024 fused_ordering(14) 00:13:14.024 fused_ordering(15) 00:13:14.024 fused_ordering(16) 00:13:14.024 fused_ordering(17) 00:13:14.024 fused_ordering(18) 00:13:14.024 fused_ordering(19) 00:13:14.024 fused_ordering(20) 00:13:14.024 fused_ordering(21) 00:13:14.024 fused_ordering(22) 00:13:14.024 fused_ordering(23) 00:13:14.024 fused_ordering(24) 00:13:14.024 fused_ordering(25) 00:13:14.024 fused_ordering(26) 00:13:14.024 fused_ordering(27) 00:13:14.024 fused_ordering(28) 00:13:14.024 fused_ordering(29) 00:13:14.024 fused_ordering(30) 00:13:14.024 fused_ordering(31) 00:13:14.024 fused_ordering(32) 00:13:14.024 fused_ordering(33) 00:13:14.024 fused_ordering(34) 00:13:14.024 fused_ordering(35) 00:13:14.024 fused_ordering(36) 00:13:14.024 fused_ordering(37) 00:13:14.024 fused_ordering(38) 00:13:14.024 fused_ordering(39) 00:13:14.024 fused_ordering(40) 00:13:14.024 fused_ordering(41) 00:13:14.024 fused_ordering(42) 00:13:14.024 fused_ordering(43) 00:13:14.024 fused_ordering(44) 00:13:14.024 fused_ordering(45) 00:13:14.024 fused_ordering(46) 00:13:14.024 fused_ordering(47) 00:13:14.024 fused_ordering(48) 00:13:14.024 fused_ordering(49) 00:13:14.024 fused_ordering(50) 00:13:14.024 fused_ordering(51) 00:13:14.024 fused_ordering(52) 00:13:14.024 fused_ordering(53) 00:13:14.024 fused_ordering(54) 00:13:14.024 fused_ordering(55) 00:13:14.024 fused_ordering(56) 00:13:14.024 fused_ordering(57) 00:13:14.024 fused_ordering(58) 00:13:14.024 fused_ordering(59) 00:13:14.024 fused_ordering(60) 00:13:14.024 fused_ordering(61) 00:13:14.024 fused_ordering(62) 00:13:14.024 fused_ordering(63) 00:13:14.024 fused_ordering(64) 00:13:14.024 fused_ordering(65) 00:13:14.024 fused_ordering(66) 00:13:14.024 fused_ordering(67) 00:13:14.024 fused_ordering(68) 00:13:14.024 fused_ordering(69) 00:13:14.024 fused_ordering(70) 00:13:14.024 fused_ordering(71) 00:13:14.024 fused_ordering(72) 00:13:14.024 fused_ordering(73) 00:13:14.024 fused_ordering(74) 00:13:14.024 fused_ordering(75) 00:13:14.024 fused_ordering(76) 00:13:14.024 fused_ordering(77) 00:13:14.024 fused_ordering(78) 00:13:14.024 fused_ordering(79) 00:13:14.024 fused_ordering(80) 00:13:14.024 fused_ordering(81) 00:13:14.024 fused_ordering(82) 00:13:14.024 fused_ordering(83) 00:13:14.024 fused_ordering(84) 00:13:14.024 fused_ordering(85) 00:13:14.024 fused_ordering(86) 00:13:14.024 fused_ordering(87) 00:13:14.024 fused_ordering(88) 00:13:14.024 fused_ordering(89) 00:13:14.024 fused_ordering(90) 00:13:14.024 fused_ordering(91) 00:13:14.024 fused_ordering(92) 00:13:14.024 fused_ordering(93) 00:13:14.024 fused_ordering(94) 00:13:14.024 fused_ordering(95) 00:13:14.024 fused_ordering(96) 00:13:14.024 fused_ordering(97) 00:13:14.024 fused_ordering(98) 
00:13:14.024 fused_ordering(99) 00:13:14.024 fused_ordering(100) 00:13:14.024 fused_ordering(101) 00:13:14.024 fused_ordering(102) 00:13:14.024 fused_ordering(103) 00:13:14.024 fused_ordering(104) 00:13:14.024 fused_ordering(105) 00:13:14.024 fused_ordering(106) 00:13:14.024 fused_ordering(107) 00:13:14.024 fused_ordering(108) 00:13:14.024 fused_ordering(109) 00:13:14.024 fused_ordering(110) 00:13:14.024 fused_ordering(111) 00:13:14.024 fused_ordering(112) 00:13:14.024 fused_ordering(113) 00:13:14.024 fused_ordering(114) 00:13:14.024 fused_ordering(115) 00:13:14.024 fused_ordering(116) 00:13:14.024 fused_ordering(117) 00:13:14.024 fused_ordering(118) 00:13:14.024 fused_ordering(119) 00:13:14.024 fused_ordering(120) 00:13:14.024 fused_ordering(121) 00:13:14.024 fused_ordering(122) 00:13:14.024 fused_ordering(123) 00:13:14.024 fused_ordering(124) 00:13:14.024 fused_ordering(125) 00:13:14.024 fused_ordering(126) 00:13:14.024 fused_ordering(127) 00:13:14.024 fused_ordering(128) 00:13:14.024 fused_ordering(129) 00:13:14.024 fused_ordering(130) 00:13:14.024 fused_ordering(131) 00:13:14.024 fused_ordering(132) 00:13:14.024 fused_ordering(133) 00:13:14.024 fused_ordering(134) 00:13:14.024 fused_ordering(135) 00:13:14.024 fused_ordering(136) 00:13:14.024 fused_ordering(137) 00:13:14.025 fused_ordering(138) 00:13:14.025 fused_ordering(139) 00:13:14.025 fused_ordering(140) 00:13:14.025 fused_ordering(141) 00:13:14.025 fused_ordering(142) 00:13:14.025 fused_ordering(143) 00:13:14.025 fused_ordering(144) 00:13:14.025 fused_ordering(145) 00:13:14.025 fused_ordering(146) 00:13:14.025 fused_ordering(147) 00:13:14.025 fused_ordering(148) 00:13:14.025 fused_ordering(149) 00:13:14.025 fused_ordering(150) 00:13:14.025 fused_ordering(151) 00:13:14.025 fused_ordering(152) 00:13:14.025 fused_ordering(153) 00:13:14.025 fused_ordering(154) 00:13:14.025 fused_ordering(155) 00:13:14.025 fused_ordering(156) 00:13:14.025 fused_ordering(157) 00:13:14.025 fused_ordering(158) 00:13:14.025 fused_ordering(159) 00:13:14.025 fused_ordering(160) 00:13:14.025 fused_ordering(161) 00:13:14.025 fused_ordering(162) 00:13:14.025 fused_ordering(163) 00:13:14.025 fused_ordering(164) 00:13:14.025 fused_ordering(165) 00:13:14.025 fused_ordering(166) 00:13:14.025 fused_ordering(167) 00:13:14.025 fused_ordering(168) 00:13:14.025 fused_ordering(169) 00:13:14.025 fused_ordering(170) 00:13:14.025 fused_ordering(171) 00:13:14.025 fused_ordering(172) 00:13:14.025 fused_ordering(173) 00:13:14.025 fused_ordering(174) 00:13:14.025 fused_ordering(175) 00:13:14.025 fused_ordering(176) 00:13:14.025 fused_ordering(177) 00:13:14.025 fused_ordering(178) 00:13:14.025 fused_ordering(179) 00:13:14.025 fused_ordering(180) 00:13:14.025 fused_ordering(181) 00:13:14.025 fused_ordering(182) 00:13:14.025 fused_ordering(183) 00:13:14.025 fused_ordering(184) 00:13:14.025 fused_ordering(185) 00:13:14.025 fused_ordering(186) 00:13:14.025 fused_ordering(187) 00:13:14.025 fused_ordering(188) 00:13:14.025 fused_ordering(189) 00:13:14.025 fused_ordering(190) 00:13:14.025 fused_ordering(191) 00:13:14.025 fused_ordering(192) 00:13:14.025 fused_ordering(193) 00:13:14.025 fused_ordering(194) 00:13:14.025 fused_ordering(195) 00:13:14.025 fused_ordering(196) 00:13:14.025 fused_ordering(197) 00:13:14.025 fused_ordering(198) 00:13:14.025 fused_ordering(199) 00:13:14.025 fused_ordering(200) 00:13:14.025 fused_ordering(201) 00:13:14.025 fused_ordering(202) 00:13:14.025 fused_ordering(203) 00:13:14.025 fused_ordering(204) 00:13:14.025 fused_ordering(205) 00:13:14.284 
fused_ordering(206) 00:13:14.284 fused_ordering(207) 00:13:14.284 fused_ordering(208) 00:13:14.284 fused_ordering(209) 00:13:14.284 fused_ordering(210) 00:13:14.284 fused_ordering(211) 00:13:14.284 fused_ordering(212) 00:13:14.284 fused_ordering(213) 00:13:14.284 fused_ordering(214) 00:13:14.284 fused_ordering(215) 00:13:14.284 fused_ordering(216) 00:13:14.284 fused_ordering(217) 00:13:14.284 fused_ordering(218) 00:13:14.284 fused_ordering(219) 00:13:14.284 fused_ordering(220) 00:13:14.284 fused_ordering(221) 00:13:14.284 fused_ordering(222) 00:13:14.284 fused_ordering(223) 00:13:14.284 fused_ordering(224) 00:13:14.284 fused_ordering(225) 00:13:14.284 fused_ordering(226) 00:13:14.284 fused_ordering(227) 00:13:14.284 fused_ordering(228) 00:13:14.284 fused_ordering(229) 00:13:14.284 fused_ordering(230) 00:13:14.284 fused_ordering(231) 00:13:14.284 fused_ordering(232) 00:13:14.284 fused_ordering(233) 00:13:14.284 fused_ordering(234) 00:13:14.284 fused_ordering(235) 00:13:14.284 fused_ordering(236) 00:13:14.284 fused_ordering(237) 00:13:14.284 fused_ordering(238) 00:13:14.284 fused_ordering(239) 00:13:14.284 fused_ordering(240) 00:13:14.284 fused_ordering(241) 00:13:14.284 fused_ordering(242) 00:13:14.284 fused_ordering(243) 00:13:14.284 fused_ordering(244) 00:13:14.284 fused_ordering(245) 00:13:14.284 fused_ordering(246) 00:13:14.284 fused_ordering(247) 00:13:14.284 fused_ordering(248) 00:13:14.284 fused_ordering(249) 00:13:14.284 fused_ordering(250) 00:13:14.284 fused_ordering(251) 00:13:14.284 fused_ordering(252) 00:13:14.284 fused_ordering(253) 00:13:14.284 fused_ordering(254) 00:13:14.284 fused_ordering(255) 00:13:14.284 fused_ordering(256) 00:13:14.284 fused_ordering(257) 00:13:14.284 fused_ordering(258) 00:13:14.284 fused_ordering(259) 00:13:14.284 fused_ordering(260) 00:13:14.284 fused_ordering(261) 00:13:14.284 fused_ordering(262) 00:13:14.284 fused_ordering(263) 00:13:14.284 fused_ordering(264) 00:13:14.284 fused_ordering(265) 00:13:14.284 fused_ordering(266) 00:13:14.284 fused_ordering(267) 00:13:14.284 fused_ordering(268) 00:13:14.284 fused_ordering(269) 00:13:14.284 fused_ordering(270) 00:13:14.284 fused_ordering(271) 00:13:14.284 fused_ordering(272) 00:13:14.284 fused_ordering(273) 00:13:14.284 fused_ordering(274) 00:13:14.284 fused_ordering(275) 00:13:14.284 fused_ordering(276) 00:13:14.284 fused_ordering(277) 00:13:14.284 fused_ordering(278) 00:13:14.284 fused_ordering(279) 00:13:14.284 fused_ordering(280) 00:13:14.284 fused_ordering(281) 00:13:14.284 fused_ordering(282) 00:13:14.284 fused_ordering(283) 00:13:14.284 fused_ordering(284) 00:13:14.284 fused_ordering(285) 00:13:14.284 fused_ordering(286) 00:13:14.284 fused_ordering(287) 00:13:14.284 fused_ordering(288) 00:13:14.284 fused_ordering(289) 00:13:14.284 fused_ordering(290) 00:13:14.284 fused_ordering(291) 00:13:14.284 fused_ordering(292) 00:13:14.284 fused_ordering(293) 00:13:14.284 fused_ordering(294) 00:13:14.284 fused_ordering(295) 00:13:14.284 fused_ordering(296) 00:13:14.284 fused_ordering(297) 00:13:14.284 fused_ordering(298) 00:13:14.284 fused_ordering(299) 00:13:14.284 fused_ordering(300) 00:13:14.284 fused_ordering(301) 00:13:14.284 fused_ordering(302) 00:13:14.284 fused_ordering(303) 00:13:14.284 fused_ordering(304) 00:13:14.284 fused_ordering(305) 00:13:14.284 fused_ordering(306) 00:13:14.284 fused_ordering(307) 00:13:14.284 fused_ordering(308) 00:13:14.284 fused_ordering(309) 00:13:14.284 fused_ordering(310) 00:13:14.284 fused_ordering(311) 00:13:14.284 fused_ordering(312) 00:13:14.284 fused_ordering(313) 
00:13:14.284 fused_ordering(314) [sequential fused_ordering counter output for entries 315-957 elided; the counter advances without gaps, console timestamps 00:13:14.284-00:13:15.680] 00:13:15.680 fused_ordering(958)
00:13:15.680 fused_ordering(959) 00:13:15.680 fused_ordering(960) 00:13:15.680 fused_ordering(961) 00:13:15.680 fused_ordering(962) 00:13:15.680 fused_ordering(963) 00:13:15.680 fused_ordering(964) 00:13:15.680 fused_ordering(965) 00:13:15.680 fused_ordering(966) 00:13:15.680 fused_ordering(967) 00:13:15.680 fused_ordering(968) 00:13:15.680 fused_ordering(969) 00:13:15.680 fused_ordering(970) 00:13:15.680 fused_ordering(971) 00:13:15.680 fused_ordering(972) 00:13:15.680 fused_ordering(973) 00:13:15.680 fused_ordering(974) 00:13:15.680 fused_ordering(975) 00:13:15.680 fused_ordering(976) 00:13:15.680 fused_ordering(977) 00:13:15.680 fused_ordering(978) 00:13:15.680 fused_ordering(979) 00:13:15.680 fused_ordering(980) 00:13:15.680 fused_ordering(981) 00:13:15.680 fused_ordering(982) 00:13:15.680 fused_ordering(983) 00:13:15.680 fused_ordering(984) 00:13:15.680 fused_ordering(985) 00:13:15.680 fused_ordering(986) 00:13:15.680 fused_ordering(987) 00:13:15.680 fused_ordering(988) 00:13:15.680 fused_ordering(989) 00:13:15.680 fused_ordering(990) 00:13:15.680 fused_ordering(991) 00:13:15.680 fused_ordering(992) 00:13:15.680 fused_ordering(993) 00:13:15.680 fused_ordering(994) 00:13:15.680 fused_ordering(995) 00:13:15.680 fused_ordering(996) 00:13:15.680 fused_ordering(997) 00:13:15.680 fused_ordering(998) 00:13:15.680 fused_ordering(999) 00:13:15.680 fused_ordering(1000) 00:13:15.680 fused_ordering(1001) 00:13:15.680 fused_ordering(1002) 00:13:15.680 fused_ordering(1003) 00:13:15.680 fused_ordering(1004) 00:13:15.680 fused_ordering(1005) 00:13:15.680 fused_ordering(1006) 00:13:15.680 fused_ordering(1007) 00:13:15.680 fused_ordering(1008) 00:13:15.680 fused_ordering(1009) 00:13:15.680 fused_ordering(1010) 00:13:15.680 fused_ordering(1011) 00:13:15.680 fused_ordering(1012) 00:13:15.680 fused_ordering(1013) 00:13:15.680 fused_ordering(1014) 00:13:15.680 fused_ordering(1015) 00:13:15.680 fused_ordering(1016) 00:13:15.680 fused_ordering(1017) 00:13:15.680 fused_ordering(1018) 00:13:15.680 fused_ordering(1019) 00:13:15.680 fused_ordering(1020) 00:13:15.680 fused_ordering(1021) 00:13:15.680 fused_ordering(1022) 00:13:15.680 fused_ordering(1023) 00:13:15.680 02:29:56 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:15.680 02:29:56 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:15.680 02:29:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:15.681 02:29:56 -- nvmf/common.sh@116 -- # sync 00:13:15.681 02:29:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:15.681 02:29:56 -- nvmf/common.sh@119 -- # set +e 00:13:15.681 02:29:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:15.681 02:29:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:15.681 rmmod nvme_tcp 00:13:15.681 rmmod nvme_fabrics 00:13:15.681 rmmod nvme_keyring 00:13:15.681 02:29:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:15.681 02:29:56 -- nvmf/common.sh@123 -- # set -e 00:13:15.681 02:29:56 -- nvmf/common.sh@124 -- # return 0 00:13:15.681 02:29:56 -- nvmf/common.sh@477 -- # '[' -n 70305 ']' 00:13:15.681 02:29:56 -- nvmf/common.sh@478 -- # killprocess 70305 00:13:15.681 02:29:56 -- common/autotest_common.sh@936 -- # '[' -z 70305 ']' 00:13:15.681 02:29:56 -- common/autotest_common.sh@940 -- # kill -0 70305 00:13:15.681 02:29:56 -- common/autotest_common.sh@941 -- # uname 00:13:15.681 02:29:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:15.681 02:29:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70305 00:13:15.681 02:29:56 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:15.681 02:29:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:15.681 killing process with pid 70305 00:13:15.681 02:29:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70305' 00:13:15.681 02:29:56 -- common/autotest_common.sh@955 -- # kill 70305 00:13:15.681 02:29:56 -- common/autotest_common.sh@960 -- # wait 70305 00:13:15.939 02:29:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:15.939 02:29:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:15.939 02:29:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:15.939 02:29:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.939 02:29:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:15.939 02:29:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.939 02:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.939 02:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.939 02:29:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:15.939 00:13:15.939 real 0m4.079s 00:13:15.939 user 0m4.566s 00:13:15.939 sys 0m1.498s 00:13:15.939 02:29:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:15.939 ************************************ 00:13:15.939 END TEST nvmf_fused_ordering 00:13:15.939 02:29:56 -- common/autotest_common.sh@10 -- # set +x 00:13:15.939 ************************************ 00:13:16.197 02:29:56 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:16.197 02:29:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:16.197 02:29:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.197 02:29:56 -- common/autotest_common.sh@10 -- # set +x 00:13:16.197 ************************************ 00:13:16.197 START TEST nvmf_delete_subsystem 00:13:16.197 ************************************ 00:13:16.197 02:29:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:16.197 * Looking for test storage... 
00:13:16.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:16.197 02:29:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:16.197 02:29:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:16.197 02:29:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:16.197 02:29:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:16.197 02:29:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:16.197 02:29:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:16.197 02:29:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:16.197 02:29:56 -- scripts/common.sh@335 -- # IFS=.-: 00:13:16.197 02:29:56 -- scripts/common.sh@335 -- # read -ra ver1 00:13:16.197 02:29:56 -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.197 02:29:56 -- scripts/common.sh@336 -- # read -ra ver2 00:13:16.197 02:29:56 -- scripts/common.sh@337 -- # local 'op=<' 00:13:16.197 02:29:56 -- scripts/common.sh@339 -- # ver1_l=2 00:13:16.198 02:29:56 -- scripts/common.sh@340 -- # ver2_l=1 00:13:16.198 02:29:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:16.198 02:29:56 -- scripts/common.sh@343 -- # case "$op" in 00:13:16.198 02:29:56 -- scripts/common.sh@344 -- # : 1 00:13:16.198 02:29:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:16.198 02:29:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.198 02:29:56 -- scripts/common.sh@364 -- # decimal 1 00:13:16.198 02:29:56 -- scripts/common.sh@352 -- # local d=1 00:13:16.198 02:29:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.198 02:29:56 -- scripts/common.sh@354 -- # echo 1 00:13:16.198 02:29:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:16.198 02:29:56 -- scripts/common.sh@365 -- # decimal 2 00:13:16.198 02:29:56 -- scripts/common.sh@352 -- # local d=2 00:13:16.198 02:29:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.198 02:29:56 -- scripts/common.sh@354 -- # echo 2 00:13:16.198 02:29:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:16.198 02:29:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:16.198 02:29:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:16.198 02:29:56 -- scripts/common.sh@367 -- # return 0 00:13:16.198 02:29:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.198 02:29:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.198 --rc genhtml_branch_coverage=1 00:13:16.198 --rc genhtml_function_coverage=1 00:13:16.198 --rc genhtml_legend=1 00:13:16.198 --rc geninfo_all_blocks=1 00:13:16.198 --rc geninfo_unexecuted_blocks=1 00:13:16.198 00:13:16.198 ' 00:13:16.198 02:29:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.198 --rc genhtml_branch_coverage=1 00:13:16.198 --rc genhtml_function_coverage=1 00:13:16.198 --rc genhtml_legend=1 00:13:16.198 --rc geninfo_all_blocks=1 00:13:16.198 --rc geninfo_unexecuted_blocks=1 00:13:16.198 00:13:16.198 ' 00:13:16.198 02:29:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.198 --rc genhtml_branch_coverage=1 00:13:16.198 --rc genhtml_function_coverage=1 00:13:16.198 --rc genhtml_legend=1 00:13:16.198 --rc geninfo_all_blocks=1 00:13:16.198 --rc geninfo_unexecuted_blocks=1 00:13:16.198 00:13:16.198 ' 00:13:16.198 
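The lt/cmp_versions trace above is the harness checking whether the installed lcov (1.15) is older than 2.x before choosing how to spell its coverage options: both version strings are split on "." and compared numerically, field by field. A minimal sketch of that comparison in plain bash (the function and variable names here are illustrative, not the ones used in scripts/common.sh):

  # Return 0 (true) if dotted version $1 sorts strictly before $2.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}   # missing fields count as 0
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc lcov_* option names"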
02:29:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.198 --rc genhtml_branch_coverage=1 00:13:16.198 --rc genhtml_function_coverage=1 00:13:16.198 --rc genhtml_legend=1 00:13:16.198 --rc geninfo_all_blocks=1 00:13:16.198 --rc geninfo_unexecuted_blocks=1 00:13:16.198 00:13:16.198 ' 00:13:16.198 02:29:56 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:16.198 02:29:56 -- nvmf/common.sh@7 -- # uname -s 00:13:16.198 02:29:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.198 02:29:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.198 02:29:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.198 02:29:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.198 02:29:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.198 02:29:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.198 02:29:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.198 02:29:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.198 02:29:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.456 02:29:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.456 02:29:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:13:16.456 02:29:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:13:16.456 02:29:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.456 02:29:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.456 02:29:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:16.456 02:29:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:16.456 02:29:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.456 02:29:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.456 02:29:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.456 02:29:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.456 02:29:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.456 02:29:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.456 02:29:56 -- paths/export.sh@5 -- # export PATH 00:13:16.456 02:29:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.456 02:29:56 -- nvmf/common.sh@46 -- # : 0 00:13:16.456 02:29:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:16.456 02:29:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:16.456 02:29:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:16.456 02:29:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.456 02:29:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.456 02:29:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:16.456 02:29:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:16.456 02:29:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:16.456 02:29:56 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:16.456 02:29:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:16.456 02:29:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.456 02:29:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:16.456 02:29:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:16.456 02:29:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:16.456 02:29:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.456 02:29:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.456 02:29:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.457 02:29:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:16.457 02:29:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:16.457 02:29:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:16.457 02:29:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:16.457 02:29:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:16.457 02:29:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:16.457 02:29:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.457 02:29:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.457 02:29:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:16.457 02:29:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:16.457 02:29:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:16.457 02:29:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:16.457 02:29:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:16.457 02:29:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:16.457 02:29:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:16.457 02:29:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:16.457 02:29:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:16.457 02:29:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:16.457 02:29:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:16.457 02:29:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:16.457 Cannot find device "nvmf_tgt_br" 00:13:16.457 02:29:56 -- nvmf/common.sh@154 -- # true 00:13:16.457 02:29:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:16.457 Cannot find device "nvmf_tgt_br2" 00:13:16.457 02:29:56 -- nvmf/common.sh@155 -- # true 00:13:16.457 02:29:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:16.457 02:29:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:16.457 Cannot find device "nvmf_tgt_br" 00:13:16.457 02:29:56 -- nvmf/common.sh@157 -- # true 00:13:16.457 02:29:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:16.457 Cannot find device "nvmf_tgt_br2" 00:13:16.457 02:29:56 -- nvmf/common.sh@158 -- # true 00:13:16.457 02:29:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:16.457 02:29:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:16.457 02:29:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:16.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.457 02:29:56 -- nvmf/common.sh@161 -- # true 00:13:16.457 02:29:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:16.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:16.457 02:29:57 -- nvmf/common.sh@162 -- # true 00:13:16.457 02:29:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:16.457 02:29:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:16.457 02:29:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:16.457 02:29:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:16.457 02:29:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:16.457 02:29:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:16.457 02:29:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:16.457 02:29:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:16.457 02:29:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:16.457 02:29:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:16.457 02:29:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:16.457 02:29:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:16.716 02:29:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:16.716 02:29:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:16.716 02:29:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:16.716 02:29:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:16.716 02:29:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:16.716 02:29:57 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:16.716 02:29:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:16.716 02:29:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:16.716 02:29:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:16.716 02:29:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:16.716 02:29:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:16.716 02:29:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:16.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:13:16.716 00:13:16.716 --- 10.0.0.2 ping statistics --- 00:13:16.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.716 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:16.716 02:29:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:16.716 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:16.716 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:16.716 00:13:16.716 --- 10.0.0.3 ping statistics --- 00:13:16.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.716 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:16.716 02:29:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:16.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:13:16.716 00:13:16.716 --- 10.0.0.1 ping statistics --- 00:13:16.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.716 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:16.716 02:29:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.716 02:29:57 -- nvmf/common.sh@421 -- # return 0 00:13:16.716 02:29:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:16.716 02:29:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.716 02:29:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:16.716 02:29:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:16.716 02:29:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.716 02:29:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:16.716 02:29:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:16.716 02:29:57 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:16.716 02:29:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:16.716 02:29:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:16.716 02:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:16.716 02:29:57 -- nvmf/common.sh@469 -- # nvmfpid=70574 00:13:16.716 02:29:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:16.716 02:29:57 -- nvmf/common.sh@470 -- # waitforlisten 70574 00:13:16.716 02:29:57 -- common/autotest_common.sh@829 -- # '[' -z 70574 ']' 00:13:16.716 02:29:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.716 02:29:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:16.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.716 02:29:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
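nvmf_veth_init above builds the virtual topology the TCP tests run on: a network namespace for the SPDK target, veth pairs for the initiator and target sides, a bridge joining them, an iptables rule for port 4420, and ping checks of every endpoint. Collapsed into one place, and using the same interface names and 10.0.0.0/24 addresses shown in the trace, the setup amounts to the following sketch (the second target interface, nvmf_tgt_if2 on 10.0.0.3, follows the identical pattern and is omitted here):

  # Namespace for the target plus veth pairs for initiator and target sides.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Address the endpoints: initiator 10.0.0.1, target 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring the links up and bridge the two "br" peers together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Let NVMe/TCP traffic in and confirm reachability before the tests start.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2

Once the topology answers pings, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the RPC socket (/var/tmp/spdk.sock) accepts connections, which is the "Waiting for process to start up..." message that follows.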
00:13:16.716 02:29:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:16.716 02:29:57 -- common/autotest_common.sh@10 -- # set +x 00:13:16.716 [2024-11-21 02:29:57.283616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:16.716 [2024-11-21 02:29:57.283709] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.985 [2024-11-21 02:29:57.426274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:16.985 [2024-11-21 02:29:57.560326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:16.986 [2024-11-21 02:29:57.561230] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.986 [2024-11-21 02:29:57.561395] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.986 [2024-11-21 02:29:57.561576] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.986 [2024-11-21 02:29:57.561843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.986 [2024-11-21 02:29:57.561856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.935 02:29:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:17.935 02:29:58 -- common/autotest_common.sh@862 -- # return 0 00:13:17.935 02:29:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:17.935 02:29:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:17.935 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:13:17.935 02:29:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.935 02:29:58 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.935 02:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.935 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:13:17.935 [2024-11-21 02:29:58.313799] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.935 02:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.935 02:29:58 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:17.935 02:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.935 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:13:17.935 02:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.935 02:29:58 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.935 02:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.935 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:13:17.935 [2024-11-21 02:29:58.330005] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.935 02:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.935 02:29:58 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:17.935 02:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.935 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:13:17.935 NULL1 00:13:17.935 02:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.935 02:29:58 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:17.935 02:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.935 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:13:17.935 Delay0 00:13:17.935 02:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.935 02:29:58 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.935 02:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.935 02:29:58 -- common/autotest_common.sh@10 -- # set +x 00:13:17.935 02:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.935 02:29:58 -- target/delete_subsystem.sh@28 -- # perf_pid=70625 00:13:17.935 02:29:58 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:17.935 02:29:58 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:17.935 [2024-11-21 02:29:58.514471] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:19.835 02:30:00 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.835 02:30:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.835 02:30:00 -- common/autotest_common.sh@10 -- # set +x 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 starting I/O failed: -6 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 starting I/O failed: -6 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 starting I/O failed: -6 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 starting I/O failed: -6 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 starting I/O failed: -6 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 starting I/O failed: -6 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Write completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 starting I/O failed: -6 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 starting I/O failed: -6 00:13:20.093 Read completed with error (sct=0, sc=8) 00:13:20.093 Write completed with 
error (sct=0, sc=8) 00:13:20.093 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completions elided, console timestamps 00:13:20.093-00:13:21.027; the qpair state transitions reported during this window are kept below]
[2024-11-21 02:30:00.549252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17897d0 is same with the state(5) to be set
[2024-11-21 02:30:00.558064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ae0000c00 is same with the state(5) to be set
[2024-11-21 02:30:01.529853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b5a0 is same with the state(5) to be set
[2024-11-21 02:30:01.550371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178a950 is same with the state(5) to be set
[2024-11-21 02:30:01.550657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1789a80 is same with the state(5) to be set
[2024-11-21 02:30:01.552819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ae000c600 is same with the state(5) to be set
00:13:21.027 Read completed with error (sct=0, sc=8) 00:13:21.027 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Write completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Write completed with error (sct=0, sc=8) 00:13:21.028 Write completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 Write completed with error (sct=0, sc=8) 00:13:21.028 Read completed with error (sct=0, sc=8) 00:13:21.028 [2024-11-21 02:30:01.553557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ae000bf20 is same with the state(5) to be set 00:13:21.028 [2024-11-21 02:30:01.555229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178b5a0 (9): Bad file descriptor 00:13:21.028 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:21.028 02:30:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.028 02:30:01 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:21.028 02:30:01 -- target/delete_subsystem.sh@35 -- # kill -0 70625 00:13:21.028 02:30:01 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:21.028 Initializing NVMe Controllers 00:13:21.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:21.028 Controller IO queue size 128, less than required. 00:13:21.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:21.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:21.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:21.028 Initialization complete. Launching workers. 
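The wall of failed completions above is the point of this test: spdk_nvme_perf still has commands queued on the TCP queue pairs when the harness deletes the target subsystem, so every outstanding request is completed with NVMe status 00/08 (sct=0, sc=8, ABORTED - SQ DELETION) and perf exits reporting "errors occurred". The attach banner just above and the latency summary below are perf's normal end-of-run report, emitted even though the I/O failed. A minimal sketch of the pattern, reusing the binaries, flags and addresses from this run but not the literal delete_subsystem.sh code:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    # Start a perf workload against the TCP listener in the background.
    "$perf" -c 0xC -q 128 -o 512 -w randrw -M 70 -t 3 -P 4 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    perf_pid=$!

    sleep 1                                                   # let perf connect and queue I/O
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # tear down while I/O is in flight

    # Poll until perf exits, the same kill -0 / sleep 0.5 idiom traced above.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "spdk_nvme_perf did not exit" >&2; exit 1; }
        sleep 0.5
    done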
00:13:21.028 ======================================================== 00:13:21.028 Latency(us) 00:13:21.028 Device Information : IOPS MiB/s Average min max 00:13:21.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.90 0.08 896636.14 556.81 1011567.09 00:13:21.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 184.31 0.09 908725.78 1760.02 1015443.14 00:13:21.028 ======================================================== 00:13:21.028 Total : 354.20 0.17 902926.82 556.81 1015443.14 00:13:21.028 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@35 -- # kill -0 70625 00:13:21.593 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70625) - No such process 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@45 -- # NOT wait 70625 00:13:21.593 02:30:02 -- common/autotest_common.sh@650 -- # local es=0 00:13:21.593 02:30:02 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 70625 00:13:21.593 02:30:02 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:21.593 02:30:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:21.593 02:30:02 -- common/autotest_common.sh@642 -- # type -t wait 00:13:21.593 02:30:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:21.593 02:30:02 -- common/autotest_common.sh@653 -- # wait 70625 00:13:21.593 02:30:02 -- common/autotest_common.sh@653 -- # es=1 00:13:21.593 02:30:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:21.593 02:30:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:21.593 02:30:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:21.593 02:30:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.593 02:30:02 -- common/autotest_common.sh@10 -- # set +x 00:13:21.593 02:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.593 02:30:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.593 02:30:02 -- common/autotest_common.sh@10 -- # set +x 00:13:21.593 [2024-11-21 02:30:02.083623] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.593 02:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.593 02:30:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.593 02:30:02 -- common/autotest_common.sh@10 -- # set +x 00:13:21.593 02:30:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@54 -- # perf_pid=70678 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@57 -- # kill -0 70678 00:13:21.593 02:30:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:21.850 [2024-11-21 02:30:02.257007] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:22.107 02:30:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:22.107 02:30:02 -- target/delete_subsystem.sh@57 -- # kill -0 70678 00:13:22.107 02:30:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:22.672 02:30:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:22.672 02:30:03 -- target/delete_subsystem.sh@57 -- # kill -0 70678 00:13:22.672 02:30:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:23.239 02:30:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:23.239 02:30:03 -- target/delete_subsystem.sh@57 -- # kill -0 70678 00:13:23.239 02:30:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:23.497 02:30:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:23.497 02:30:04 -- target/delete_subsystem.sh@57 -- # kill -0 70678 00:13:23.497 02:30:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:24.065 02:30:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:24.065 02:30:04 -- target/delete_subsystem.sh@57 -- # kill -0 70678 00:13:24.065 02:30:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:24.634 02:30:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:24.634 02:30:05 -- target/delete_subsystem.sh@57 -- # kill -0 70678 00:13:24.634 02:30:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:24.892 Initializing NVMe Controllers 00:13:24.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:24.892 Controller IO queue size 128, less than required. 00:13:24.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:24.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:24.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:24.892 Initialization complete. Launching workers. 
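The test then re-creates the subsystem (capped at 10 namespaces via -m 10), re-adds the TCP listener and the Delay0 namespace, and launches a second 3-second perf run while polling kill -0 every 0.5 s until it exits on its own. Delay0 is presumably a delay bdev configured earlier in the run, which is why the average latencies in the table that follows sit around one second rather than microseconds.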
00:13:24.892 ======================================================== 00:13:24.892 Latency(us) 00:13:24.892 Device Information : IOPS MiB/s Average min max 00:13:24.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002818.00 1000122.98 1010774.64 00:13:24.892 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004462.06 1000106.00 1042150.73 00:13:24.892 ======================================================== 00:13:24.892 Total : 256.00 0.12 1003640.03 1000106.00 1042150.73 00:13:24.892 00:13:25.151 02:30:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:25.151 02:30:05 -- target/delete_subsystem.sh@57 -- # kill -0 70678 00:13:25.151 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70678) - No such process 00:13:25.151 02:30:05 -- target/delete_subsystem.sh@67 -- # wait 70678 00:13:25.151 02:30:05 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:25.151 02:30:05 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:25.151 02:30:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:25.151 02:30:05 -- nvmf/common.sh@116 -- # sync 00:13:25.151 02:30:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:25.151 02:30:05 -- nvmf/common.sh@119 -- # set +e 00:13:25.151 02:30:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:25.151 02:30:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:25.151 rmmod nvme_tcp 00:13:25.151 rmmod nvme_fabrics 00:13:25.152 rmmod nvme_keyring 00:13:25.152 02:30:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:25.152 02:30:05 -- nvmf/common.sh@123 -- # set -e 00:13:25.152 02:30:05 -- nvmf/common.sh@124 -- # return 0 00:13:25.152 02:30:05 -- nvmf/common.sh@477 -- # '[' -n 70574 ']' 00:13:25.152 02:30:05 -- nvmf/common.sh@478 -- # killprocess 70574 00:13:25.152 02:30:05 -- common/autotest_common.sh@936 -- # '[' -z 70574 ']' 00:13:25.152 02:30:05 -- common/autotest_common.sh@940 -- # kill -0 70574 00:13:25.152 02:30:05 -- common/autotest_common.sh@941 -- # uname 00:13:25.152 02:30:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:25.152 02:30:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70574 00:13:25.152 killing process with pid 70574 00:13:25.152 02:30:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:25.152 02:30:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:25.152 02:30:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70574' 00:13:25.152 02:30:05 -- common/autotest_common.sh@955 -- # kill 70574 00:13:25.152 02:30:05 -- common/autotest_common.sh@960 -- # wait 70574 00:13:25.410 02:30:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:25.410 02:30:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:25.410 02:30:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:25.410 02:30:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.410 02:30:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:25.410 02:30:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.410 02:30:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.410 02:30:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.670 02:30:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:25.670 00:13:25.670 real 0m9.460s 00:13:25.670 user 0m28.843s 00:13:25.670 sys 0m1.531s 00:13:25.670 02:30:06 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:13:25.670 02:30:06 -- common/autotest_common.sh@10 -- # set +x 00:13:25.670 ************************************ 00:13:25.670 END TEST nvmf_delete_subsystem 00:13:25.670 ************************************ 00:13:25.670 02:30:06 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:25.670 02:30:06 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:13:25.670 02:30:06 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:25.670 02:30:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:25.670 02:30:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.670 02:30:06 -- common/autotest_common.sh@10 -- # set +x 00:13:25.670 ************************************ 00:13:25.670 START TEST nvmf_vfio_user 00:13:25.670 ************************************ 00:13:25.670 02:30:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:25.670 * Looking for test storage... 00:13:25.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.670 02:30:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:25.670 02:30:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:25.670 02:30:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:25.670 02:30:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:25.670 02:30:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:25.670 02:30:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:25.670 02:30:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:25.670 02:30:06 -- scripts/common.sh@335 -- # IFS=.-: 00:13:25.670 02:30:06 -- scripts/common.sh@335 -- # read -ra ver1 00:13:25.670 02:30:06 -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.670 02:30:06 -- scripts/common.sh@336 -- # read -ra ver2 00:13:25.670 02:30:06 -- scripts/common.sh@337 -- # local 'op=<' 00:13:25.670 02:30:06 -- scripts/common.sh@339 -- # ver1_l=2 00:13:25.670 02:30:06 -- scripts/common.sh@340 -- # ver2_l=1 00:13:25.670 02:30:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:25.670 02:30:06 -- scripts/common.sh@343 -- # case "$op" in 00:13:25.670 02:30:06 -- scripts/common.sh@344 -- # : 1 00:13:25.670 02:30:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:25.670 02:30:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.670 02:30:06 -- scripts/common.sh@364 -- # decimal 1 00:13:25.670 02:30:06 -- scripts/common.sh@352 -- # local d=1 00:13:25.670 02:30:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.670 02:30:06 -- scripts/common.sh@354 -- # echo 1 00:13:25.670 02:30:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:25.670 02:30:06 -- scripts/common.sh@365 -- # decimal 2 00:13:25.670 02:30:06 -- scripts/common.sh@352 -- # local d=2 00:13:25.670 02:30:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.670 02:30:06 -- scripts/common.sh@354 -- # echo 2 00:13:25.670 02:30:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:25.670 02:30:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:25.670 02:30:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:25.670 02:30:06 -- scripts/common.sh@367 -- # return 0 00:13:25.670 02:30:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.670 02:30:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:25.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.670 --rc genhtml_branch_coverage=1 00:13:25.670 --rc genhtml_function_coverage=1 00:13:25.670 --rc genhtml_legend=1 00:13:25.670 --rc geninfo_all_blocks=1 00:13:25.670 --rc geninfo_unexecuted_blocks=1 00:13:25.670 00:13:25.670 ' 00:13:25.670 02:30:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:25.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.670 --rc genhtml_branch_coverage=1 00:13:25.670 --rc genhtml_function_coverage=1 00:13:25.670 --rc genhtml_legend=1 00:13:25.670 --rc geninfo_all_blocks=1 00:13:25.670 --rc geninfo_unexecuted_blocks=1 00:13:25.670 00:13:25.670 ' 00:13:25.670 02:30:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:25.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.670 --rc genhtml_branch_coverage=1 00:13:25.670 --rc genhtml_function_coverage=1 00:13:25.670 --rc genhtml_legend=1 00:13:25.670 --rc geninfo_all_blocks=1 00:13:25.670 --rc geninfo_unexecuted_blocks=1 00:13:25.670 00:13:25.670 ' 00:13:25.670 02:30:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:25.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.670 --rc genhtml_branch_coverage=1 00:13:25.670 --rc genhtml_function_coverage=1 00:13:25.670 --rc genhtml_legend=1 00:13:25.671 --rc geninfo_all_blocks=1 00:13:25.671 --rc geninfo_unexecuted_blocks=1 00:13:25.671 00:13:25.671 ' 00:13:25.671 02:30:06 -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.671 02:30:06 -- nvmf/common.sh@7 -- # uname -s 00:13:25.671 02:30:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.671 02:30:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.671 02:30:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.671 02:30:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.671 02:30:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.671 02:30:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.671 02:30:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.671 02:30:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.671 02:30:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.671 02:30:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.671 02:30:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
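The lt/cmp_versions calls traced a few lines earlier implement the lcov minimum-version check: each version string is split on '.', '-' and ':' into an array and the fields are compared numerically, left to right. A stand-alone sketch of that idiom (an assumed simplification with purely numeric fields, not the actual scripts/common.sh code):

    # version_lt A B: succeed (return 0) when version A sorts strictly before version B.
    version_lt() {
        local IFS=.-:                          # split fields on '.', '-' and ':'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}    # missing fields count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                               # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"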
00:13:25.671 02:30:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:13:25.671 02:30:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.671 02:30:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.671 02:30:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.671 02:30:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.671 02:30:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.671 02:30:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.671 02:30:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.671 02:30:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.671 02:30:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.671 02:30:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.671 02:30:06 -- paths/export.sh@5 -- # export PATH 00:13:25.671 02:30:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.671 02:30:06 -- nvmf/common.sh@46 -- # : 0 00:13:25.671 02:30:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:25.671 02:30:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:25.671 02:30:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:25.671 02:30:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.671 02:30:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.671 02:30:06 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:25.671 02:30:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:25.671 02:30:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:25.671 02:30:06 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:25.671 02:30:06 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:25.671 02:30:06 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:25.671 02:30:06 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:25.671 02:30:06 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:25.671 02:30:06 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:25.671 02:30:06 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:25.930 02:30:06 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:25.930 02:30:06 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:25.930 02:30:06 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:25.930 02:30:06 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=70809 00:13:25.930 Process pid: 70809 00:13:25.930 02:30:06 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:25.930 02:30:06 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 70809' 00:13:25.930 02:30:06 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:25.930 02:30:06 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 70809 00:13:25.930 02:30:06 -- common/autotest_common.sh@829 -- # '[' -z 70809 ']' 00:13:25.930 02:30:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.930 02:30:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:25.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.930 02:30:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.930 02:30:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:25.930 02:30:06 -- common/autotest_common.sh@10 -- # set +x 00:13:25.930 [2024-11-21 02:30:06.365229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:25.930 [2024-11-21 02:30:06.365379] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.930 [2024-11-21 02:30:06.505197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.189 [2024-11-21 02:30:06.615964] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:26.189 [2024-11-21 02:30:06.616177] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.189 [2024-11-21 02:30:06.616190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.189 [2024-11-21 02:30:06.616198] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
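Everything from "Process pid: 70809" onward is the vfio-user target coming up: nvmf_tgt is started with a 0xFFFF trace mask on cores 0-3 and the harness blocks until its RPC socket answers. Roughly, and only as an approximation of autotest's waitforlisten helper:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &        # same flags as the run above
    nvmfpid=$!

    # Poll the default RPC socket until the target responds.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done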
00:13:26.189 [2024-11-21 02:30:06.616352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.189 [2024-11-21 02:30:06.616728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.189 [2024-11-21 02:30:06.617219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.189 [2024-11-21 02:30:06.617267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.756 02:30:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.756 02:30:07 -- common/autotest_common.sh@862 -- # return 0 00:13:26.756 02:30:07 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:27.694 02:30:08 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:27.953 02:30:08 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:27.953 02:30:08 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:27.953 02:30:08 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.953 02:30:08 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:27.953 02:30:08 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:28.520 Malloc1 00:13:28.520 02:30:08 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:28.777 02:30:09 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:29.034 02:30:09 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:29.292 02:30:09 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:29.292 02:30:09 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:29.292 02:30:09 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:29.550 Malloc2 00:13:29.550 02:30:10 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:29.808 02:30:10 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:30.067 02:30:10 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:30.327 02:30:10 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:30.327 02:30:10 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:30.327 02:30:10 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:30.327 02:30:10 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:30.327 02:30:10 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:30.327 02:30:10 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:30.327 [2024-11-21 02:30:10.766352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
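The setup_nvmf_vfio_user step above builds two vfio-user endpoints; each one is simply a directory that a VFIOUSER listener is bound to, backed by a 64 MiB malloc bdev. Condensed from the rpc.py calls in the trace (device 1 shown; device 2 repeats the same steps with Malloc2 and cnode2):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t VFIOUSER                  # one transport shared by both endpoints
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1           # the listener's socket/ctrl files live here

    "$rpc" bdev_malloc_create 64 512 -b Malloc1               # 64 MiB bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    "$rpc" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

    # The identify pass whose output follows connects through that directory instead of an IP:port.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci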
00:13:30.327 [2024-11-21 02:30:10.766400] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70946 ] 00:13:30.327 [2024-11-21 02:30:10.904308] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:30.327 [2024-11-21 02:30:10.907846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:30.327 [2024-11-21 02:30:10.907895] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f96b88ed000 00:13:30.327 [2024-11-21 02:30:10.908846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.327 [2024-11-21 02:30:10.909833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.328 [2024-11-21 02:30:10.910843] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.328 [2024-11-21 02:30:10.911846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:30.328 [2024-11-21 02:30:10.912835] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:30.328 [2024-11-21 02:30:10.913852] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.328 [2024-11-21 02:30:10.914851] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:30.328 [2024-11-21 02:30:10.915847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:30.328 [2024-11-21 02:30:10.916864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:30.328 [2024-11-21 02:30:10.916905] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f96b7fea000 00:13:30.328 [2024-11-21 02:30:10.918171] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:30.328 [2024-11-21 02:30:10.939044] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:30.328 [2024-11-21 02:30:10.939124] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:30.328 [2024-11-21 02:30:10.942051] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:30.328 [2024-11-21 02:30:10.942133] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:30.328 [2024-11-21 02:30:10.942228] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:30.328 [2024-11-21 
02:30:10.942255] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:30.328 [2024-11-21 02:30:10.942261] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:30.328 [2024-11-21 02:30:10.943024] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:30.328 [2024-11-21 02:30:10.943074] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:30.328 [2024-11-21 02:30:10.943100] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:30.328 [2024-11-21 02:30:10.944024] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:30.328 [2024-11-21 02:30:10.944047] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:30.328 [2024-11-21 02:30:10.944059] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:30.328 [2024-11-21 02:30:10.945035] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:30.328 [2024-11-21 02:30:10.945061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:30.328 [2024-11-21 02:30:10.946040] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:30.328 [2024-11-21 02:30:10.946064] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:30.328 [2024-11-21 02:30:10.946071] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:30.328 [2024-11-21 02:30:10.946080] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:30.328 [2024-11-21 02:30:10.946201] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:30.328 [2024-11-21 02:30:10.946207] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:30.328 [2024-11-21 02:30:10.946213] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:30.328 [2024-11-21 02:30:10.947054] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:30.328 [2024-11-21 02:30:10.948057] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:30.328 [2024-11-21 02:30:10.949059] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: 
offset 0x14, value 0x460001 00:13:30.328 [2024-11-21 02:30:10.950129] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:30.328 [2024-11-21 02:30:10.951070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:30.328 [2024-11-21 02:30:10.951109] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:30.328 [2024-11-21 02:30:10.951116] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:30.328 [2024-11-21 02:30:10.951152] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:30.328 [2024-11-21 02:30:10.951169] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:30.328 [2024-11-21 02:30:10.951185] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:30.328 [2024-11-21 02:30:10.951191] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:30.328 [2024-11-21 02:30:10.951207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:30.328 [2024-11-21 02:30:10.951280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:30.328 [2024-11-21 02:30:10.951307] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:30.328 [2024-11-21 02:30:10.951313] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:30.328 [2024-11-21 02:30:10.951317] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:30.328 [2024-11-21 02:30:10.951322] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:30.328 [2024-11-21 02:30:10.951327] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:30.328 [2024-11-21 02:30:10.951332] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:30.328 [2024-11-21 02:30:10.951337] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:30.328 [2024-11-21 02:30:10.951366] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:30.328 [2024-11-21 02:30:10.951378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:30.328 [2024-11-21 02:30:10.951399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:30.328 [2024-11-21 02:30:10.951414] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:30.329 [2024-11-21 02:30:10.951424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:30.329 [2024-11-21 02:30:10.951435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:30.329 [2024-11-21 02:30:10.951443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:30.329 [2024-11-21 02:30:10.951449] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951463] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.951484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.951491] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:30.329 [2024-11-21 02:30:10.951497] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951505] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.951533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.951596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951607] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951616] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:30.329 [2024-11-21 02:30:10.951621] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:30.329 [2024-11-21 02:30:10.951628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.951656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 
02:30:10.951671] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:30.329 [2024-11-21 02:30:10.951682] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951699] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:30.329 [2024-11-21 02:30:10.951704] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:30.329 [2024-11-21 02:30:10.951711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.951738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.951766] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951776] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951784] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:30.329 [2024-11-21 02:30:10.951789] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:30.329 [2024-11-21 02:30:10.951795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.951809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.951818] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951852] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951862] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951867] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951873] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:30.329 [2024-11-21 02:30:10.951878] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:30.329 [2024-11-21 02:30:10.951883] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:30.329 [2024-11-21 02:30:10.951913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.951929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.951944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.951956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.951970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.951993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.952007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.952015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.952030] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:30.329 [2024-11-21 02:30:10.952035] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:30.329 [2024-11-21 02:30:10.952039] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:30.329 [2024-11-21 02:30:10.952043] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:30.329 [2024-11-21 02:30:10.952049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:30.329 [2024-11-21 02:30:10.952058] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:30.329 [2024-11-21 02:30:10.952069] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:30.329 [2024-11-21 02:30:10.952090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.952098] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:30.329 [2024-11-21 02:30:10.952103] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:30.329 [2024-11-21 02:30:10.952109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.952118] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:30.329 [2024-11-21 02:30:10.952122] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:30.329 [2024-11-21 02:30:10.952128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:30.329 [2024-11-21 02:30:10.952136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.952167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.952178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:30.329 [2024-11-21 02:30:10.952187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:30.329 ===================================================== 00:13:30.329 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:30.329 ===================================================== 00:13:30.329 Controller Capabilities/Features 00:13:30.329 ================================ 00:13:30.329 Vendor ID: 4e58 00:13:30.329 Subsystem Vendor ID: 4e58 00:13:30.329 Serial Number: SPDK1 00:13:30.329 Model Number: SPDK bdev Controller 00:13:30.329 Firmware Version: 24.01.1 00:13:30.329 Recommended Arb Burst: 6 00:13:30.329 IEEE OUI Identifier: 8d 6b 50 00:13:30.329 Multi-path I/O 00:13:30.329 May have multiple subsystem ports: Yes 00:13:30.329 May have multiple controllers: Yes 00:13:30.329 Associated with SR-IOV VF: No 00:13:30.329 Max Data Transfer Size: 131072 00:13:30.329 Max Number of Namespaces: 32 00:13:30.329 Max Number of I/O Queues: 127 00:13:30.329 NVMe Specification Version (VS): 1.3 00:13:30.329 NVMe Specification Version (Identify): 1.3 00:13:30.329 Maximum Queue Entries: 256 00:13:30.329 Contiguous Queues Required: Yes 00:13:30.329 Arbitration Mechanisms Supported 00:13:30.329 Weighted Round Robin: Not Supported 00:13:30.329 Vendor Specific: Not Supported 00:13:30.329 Reset Timeout: 15000 ms 00:13:30.329 Doorbell Stride: 4 bytes 00:13:30.329 NVM Subsystem Reset: Not Supported 00:13:30.329 Command Sets Supported 00:13:30.329 NVM Command Set: Supported 00:13:30.329 Boot Partition: Not Supported 00:13:30.330 Memory Page Size Minimum: 4096 bytes 00:13:30.330 Memory Page Size Maximum: 4096 bytes 00:13:30.330 Persistent Memory Region: Not Supported 00:13:30.330 Optional Asynchronous Events Supported 00:13:30.330 Namespace Attribute Notices: Supported 00:13:30.330 Firmware Activation Notices: Not Supported 00:13:30.330 ANA Change Notices: Not Supported 00:13:30.330 PLE Aggregate Log Change Notices: Not Supported 00:13:30.330 LBA Status Info Alert Notices: Not Supported 00:13:30.330 EGE Aggregate Log Change Notices: Not Supported 00:13:30.330 Normal NVM Subsystem Shutdown event: Not Supported 00:13:30.330 Zone Descriptor Change Notices: Not Supported 00:13:30.330 Discovery Log Change Notices: Not Supported 00:13:30.330 Controller Attributes 00:13:30.330 128-bit Host Identifier: Supported 00:13:30.330 Non-Operational Permissive Mode: Not Supported 00:13:30.330 NVM Sets: Not Supported 00:13:30.330 Read Recovery Levels: Not Supported 00:13:30.330 Endurance Groups: Not Supported 00:13:30.330 Predictable Latency Mode: Not Supported 00:13:30.330 Traffic Based Keep ALive: Not Supported 00:13:30.330 Namespace Granularity: Not Supported 00:13:30.330 SQ Associations: Not Supported 00:13:30.330 UUID List: Not Supported 00:13:30.330 Multi-Domain Subsystem: Not Supported 00:13:30.330 Fixed Capacity Management: Not Supported 00:13:30.330 
Variable Capacity Management: Not Supported 00:13:30.330 Delete Endurance Group: Not Supported 00:13:30.330 Delete NVM Set: Not Supported 00:13:30.330 Extended LBA Formats Supported: Not Supported 00:13:30.330 Flexible Data Placement Supported: Not Supported 00:13:30.330 00:13:30.330 Controller Memory Buffer Support 00:13:30.330 ================================ 00:13:30.330 Supported: No 00:13:30.330 00:13:30.330 Persistent Memory Region Support 00:13:30.330 ================================ 00:13:30.330 Supported: No 00:13:30.330 00:13:30.330 Admin Command Set Attributes 00:13:30.330 ============================ 00:13:30.330 Security Send/Receive: Not Supported 00:13:30.330 Format NVM: Not Supported 00:13:30.330 Firmware Activate/Download: Not Supported 00:13:30.330 Namespace Management: Not Supported 00:13:30.330 Device Self-Test: Not Supported 00:13:30.330 Directives: Not Supported 00:13:30.330 NVMe-MI: Not Supported 00:13:30.330 Virtualization Management: Not Supported 00:13:30.330 Doorbell Buffer Config: Not Supported 00:13:30.330 Get LBA Status Capability: Not Supported 00:13:30.330 Command & Feature Lockdown Capability: Not Supported 00:13:30.330 Abort Command Limit: 4 00:13:30.330 Async Event Request Limit: 4 00:13:30.330 Number of Firmware Slots: N/A 00:13:30.330 Firmware Slot 1 Read-Only: N/A 00:13:30.330 Firmware Activation Without Reset: N/A 00:13:30.330 Multiple Update Detection Support: N/A 00:13:30.330 Firmware Update Granularity: No Information Provided 00:13:30.330 Per-Namespace SMART Log: No 00:13:30.330 Asymmetric Namespace Access Log Page: Not Supported 00:13:30.330 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:30.330 Command Effects Log Page: Supported 00:13:30.330 Get Log Page Extended Data: Supported 00:13:30.330 Telemetry Log Pages: Not Supported 00:13:30.330 Persistent Event Log Pages: Not Supported 00:13:30.330 Supported Log Pages Log Page: May Support 00:13:30.330 Commands Supported & Effects Log Page: Not Supported 00:13:30.330 Feature Identifiers & Effects Log Page:May Support 00:13:30.330 NVMe-MI Commands & Effects Log Page: May Support 00:13:30.330 Data Area 4 for Telemetry Log: Not Supported 00:13:30.330 Error Log Page Entries Supported: 128 00:13:30.330 Keep Alive: Supported 00:13:30.330 Keep Alive Granularity: 10000 ms 00:13:30.330 00:13:30.330 NVM Command Set Attributes 00:13:30.330 ========================== 00:13:30.330 Submission Queue Entry Size 00:13:30.330 Max: 64 00:13:30.330 Min: 64 00:13:30.330 Completion Queue Entry Size 00:13:30.330 Max: 16 00:13:30.330 Min: 16 00:13:30.330 Number of Namespaces: 32 00:13:30.330 Compare Command: Supported 00:13:30.330 Write Uncorrectable Command: Not Supported 00:13:30.330 Dataset Management Command: Supported 00:13:30.330 Write Zeroes Command: Supported 00:13:30.330 Set Features Save Field: Not Supported 00:13:30.330 Reservations: Not Supported 00:13:30.330 Timestamp: Not Supported 00:13:30.330 Copy: Supported 00:13:30.330 Volatile Write Cache: Present 00:13:30.330 Atomic Write Unit (Normal): 1 00:13:30.330 Atomic Write Unit (PFail): 1 00:13:30.330 Atomic Compare & Write Unit: 1 00:13:30.330 Fused Compare & Write: Supported 00:13:30.330 Scatter-Gather List 00:13:30.330 SGL Command Set: Supported (Dword aligned) 00:13:30.330 SGL Keyed: Not Supported 00:13:30.330 SGL Bit Bucket Descriptor: Not Supported 00:13:30.330 SGL Metadata Pointer: Not Supported 00:13:30.330 Oversized SGL: Not Supported 00:13:30.330 SGL Metadata Address: Not Supported 00:13:30.330 SGL Offset: Not Supported 00:13:30.330 Transport SGL Data 
Block: Not Supported 00:13:30.330 Replay Protected Memory Block: Not Supported 00:13:30.330 00:13:30.330 Firmware Slot Information 00:13:30.330 ========================= 00:13:30.330 Active slot: 1 00:13:30.330 Slot 1 Firmware Revision: 24.01.1 00:13:30.330 00:13:30.330 00:13:30.330 Commands Supported and Effects 00:13:30.330 ============================== 00:13:30.330 Admin Commands 00:13:30.330 -------------- 00:13:30.330 Get Log Page (02h): Supported 00:13:30.330 Identify (06h): Supported 00:13:30.330 Abort (08h): Supported 00:13:30.330 Set Features (09h): Supported 00:13:30.330 Get Features (0Ah): Supported 00:13:30.330 Asynchronous Event Request (0Ch): Supported 00:13:30.330 Keep Alive (18h): Supported 00:13:30.330 I/O Commands 00:13:30.330 ------------ 00:13:30.330 Flush (00h): Supported LBA-Change 00:13:30.330 Write (01h): Supported LBA-Change 00:13:30.330 Read (02h): Supported 00:13:30.330 Compare (05h): Supported 00:13:30.330 Write Zeroes (08h): Supported LBA-Change 00:13:30.330 Dataset Management (09h): Supported LBA-Change 00:13:30.330 Copy (19h): Supported LBA-Change 00:13:30.330 Unknown (79h): Supported LBA-Change 00:13:30.330 Unknown (7Ah): Supported 00:13:30.330 00:13:30.330 Error Log 00:13:30.330 ========= 00:13:30.330 00:13:30.330 Arbitration 00:13:30.330 =========== 00:13:30.330 Arbitration Burst: 1 00:13:30.330 00:13:30.330 Power Management 00:13:30.330 ================ 00:13:30.330 Number of Power States: 1 00:13:30.330 Current Power State: Power State #0 00:13:30.330 Power State #0: 00:13:30.330 Max Power: 0.00 W 00:13:30.330 Non-Operational State: Operational 00:13:30.330 Entry Latency: Not Reported 00:13:30.330 Exit Latency: Not Reported 00:13:30.330 Relative Read Throughput: 0 00:13:30.330 Relative Read Latency: 0 00:13:30.330 Relative Write Throughput: 0 00:13:30.330 Relative Write Latency: 0 00:13:30.330 Idle Power: Not Reported 00:13:30.330 Active Power: Not Reported 00:13:30.330 Non-Operational Permissive Mode: Not Supported 00:13:30.330 00:13:30.330 Health Information 00:13:30.330 ================== 00:13:30.330 Critical Warnings: 00:13:30.330 Available Spare Space: OK 00:13:30.330 Temperature: OK 00:13:30.330 Device Reliability: OK 00:13:30.330 Read Only: No 00:13:30.330 Volatile Memory Backup: OK 00:13:30.330 Current Temperature: 0 Kelvin[2024-11-21 02:30:10.952343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:30.330 [2024-11-21 02:30:10.952355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:30.330 [2024-11-21 02:30:10.952410] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:30.330 [2024-11-21 02:30:10.952423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:30.330 [2024-11-21 02:30:10.952431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:30.330 [2024-11-21 02:30:10.952438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:30.330 [2024-11-21 02:30:10.952445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:30.330 [2024-11-21 02:30:10.953084] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:30.330 [2024-11-21 02:30:10.953126] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:30.330 [2024-11-21 02:30:10.954153] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:30.330 [2024-11-21 02:30:10.954170] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:30.330 [2024-11-21 02:30:10.955088] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:30.330 [2024-11-21 02:30:10.955132] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:30.330 [2024-11-21 02:30:10.955293] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:30.331 [2024-11-21 02:30:10.959940] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:30.590 (-273 Celsius) 00:13:30.590 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:30.590 Available Spare: 0% 00:13:30.590 Available Spare Threshold: 0% 00:13:30.590 Life Percentage Used: 0% 00:13:30.590 Data Units Read: 0 00:13:30.590 Data Units Written: 0 00:13:30.590 Host Read Commands: 0 00:13:30.590 Host Write Commands: 0 00:13:30.590 Controller Busy Time: 0 minutes 00:13:30.590 Power Cycles: 0 00:13:30.590 Power On Hours: 0 hours 00:13:30.590 Unsafe Shutdowns: 0 00:13:30.590 Unrecoverable Media Errors: 0 00:13:30.590 Lifetime Error Log Entries: 0 00:13:30.590 Warning Temperature Time: 0 minutes 00:13:30.590 Critical Temperature Time: 0 minutes 00:13:30.590 00:13:30.590 Number of Queues 00:13:30.590 ================ 00:13:30.590 Number of I/O Submission Queues: 127 00:13:30.590 Number of I/O Completion Queues: 127 00:13:30.590 00:13:30.590 Active Namespaces 00:13:30.590 ================= 00:13:30.590 Namespace ID:1 00:13:30.590 Error Recovery Timeout: Unlimited 00:13:30.590 Command Set Identifier: NVM (00h) 00:13:30.590 Deallocate: Supported 00:13:30.590 Deallocated/Unwritten Error: Not Supported 00:13:30.590 Deallocated Read Value: Unknown 00:13:30.590 Deallocate in Write Zeroes: Not Supported 00:13:30.590 Deallocated Guard Field: 0xFFFF 00:13:30.590 Flush: Supported 00:13:30.590 Reservation: Supported 00:13:30.590 Namespace Sharing Capabilities: Multiple Controllers 00:13:30.590 Size (in LBAs): 131072 (0GiB) 00:13:30.590 Capacity (in LBAs): 131072 (0GiB) 00:13:30.590 Utilization (in LBAs): 131072 (0GiB) 00:13:30.590 NGUID: BC2CD621543C45DF9F8D3A72F4694E05 00:13:30.590 UUID: bc2cd621-543c-45df-9f8d-3a72f4694e05 00:13:30.590 Thin Provisioning: Not Supported 00:13:30.590 Per-NS Atomic Units: Yes 00:13:30.590 Atomic Boundary Size (Normal): 0 00:13:30.590 Atomic Boundary Size (PFail): 0 00:13:30.590 Atomic Boundary Offset: 0 00:13:30.590 Maximum Single Source Range Length: 65535 00:13:30.590 Maximum Copy Length: 65535 00:13:30.590 Maximum Source Range Count: 1 00:13:30.590 NGUID/EUI64 Never Reused: No 00:13:30.590 Namespace Write Protected: No 00:13:30.590 Number of LBA Formats: 1 00:13:30.590 Current LBA Format: LBA Format #00 00:13:30.590 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:30.590 00:13:30.590 
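The perf passes recorded below exercise this same vfio-user controller. A minimal sketch of the invocation pattern, copied from the commands used in this run and assuming it is launched from the SPDK repo root (only the workload flag differs between the read and write passes):

    # read pass: queue depth 128, 4 KiB I/O, 5 seconds, core mask 0x2 (lcore 1)
    ./build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

    # write pass: same target and I/O shape, workload switched to write
    ./build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
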
02:30:11 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:35.867 Initializing NVMe Controllers 00:13:35.867 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:35.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:35.867 Initialization complete. Launching workers. 00:13:35.867 ======================================================== 00:13:35.867 Latency(us) 00:13:35.867 Device Information : IOPS MiB/s Average min max 00:13:35.867 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 38882.52 151.88 3291.77 1058.51 10140.75 00:13:35.867 ======================================================== 00:13:35.867 Total : 38882.52 151.88 3291.77 1058.51 10140.75 00:13:35.867 00:13:35.867 02:30:16 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:41.130 Initializing NVMe Controllers 00:13:41.130 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:41.130 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:41.130 Initialization complete. Launching workers. 00:13:41.130 ======================================================== 00:13:41.130 Latency(us) 00:13:41.130 Device Information : IOPS MiB/s Average min max 00:13:41.130 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15938.13 62.26 8034.84 4881.42 16055.28 00:13:41.130 ======================================================== 00:13:41.130 Total : 15938.13 62.26 8034.84 4881.42 16055.28 00:13:41.130 00:13:41.130 02:30:21 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:46.395 Initializing NVMe Controllers 00:13:46.395 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:46.395 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:46.395 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:46.395 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:46.395 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:46.395 Initialization complete. Launching workers. 
00:13:46.395 Starting thread on core 2 00:13:46.395 Starting thread on core 3 00:13:46.395 Starting thread on core 1 00:13:46.395 02:30:27 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:50.580 Initializing NVMe Controllers 00:13:50.580 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.580 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.580 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:50.580 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:50.580 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:50.580 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:50.580 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:50.580 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:50.580 Initialization complete. Launching workers. 00:13:50.580 Starting thread on core 1 with urgent priority queue 00:13:50.580 Starting thread on core 2 with urgent priority queue 00:13:50.580 Starting thread on core 3 with urgent priority queue 00:13:50.580 Starting thread on core 0 with urgent priority queue 00:13:50.580 SPDK bdev Controller (SPDK1 ) core 0: 3192.67 IO/s 31.32 secs/100000 ios 00:13:50.580 SPDK bdev Controller (SPDK1 ) core 1: 3622.00 IO/s 27.61 secs/100000 ios 00:13:50.580 SPDK bdev Controller (SPDK1 ) core 2: 3087.67 IO/s 32.39 secs/100000 ios 00:13:50.580 SPDK bdev Controller (SPDK1 ) core 3: 3263.33 IO/s 30.64 secs/100000 ios 00:13:50.580 ======================================================== 00:13:50.580 00:13:50.580 02:30:30 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:50.580 Initializing NVMe Controllers 00:13:50.580 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.580 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.580 Namespace ID: 1 size: 0GB 00:13:50.580 Initialization complete. 00:13:50.580 INFO: using host memory buffer for IO 00:13:50.580 Hello world! 00:13:50.580 02:30:30 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:51.987 Initializing NVMe Controllers 00:13:51.987 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:51.987 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:51.987 Initialization complete. Launching workers. 
00:13:51.987 submit (in ns) avg, min, max = 8641.4, 3714.5, 4044966.4 00:13:51.987 complete (in ns) avg, min, max = 35253.1, 2175.5, 8022258.2 00:13:51.987 00:13:51.987 Submit histogram 00:13:51.987 ================ 00:13:51.987 Range in us Cumulative Count 00:13:51.987 3.709 - 3.724: 0.0103% ( 1) 00:13:51.987 3.724 - 3.753: 0.5645% ( 54) 00:13:51.987 3.753 - 3.782: 2.3709% ( 176) 00:13:51.987 3.782 - 3.811: 5.2858% ( 284) 00:13:51.987 3.811 - 3.840: 7.6773% ( 233) 00:13:51.987 3.840 - 3.869: 11.0541% ( 329) 00:13:51.987 3.869 - 3.898: 20.0041% ( 872) 00:13:51.987 3.898 - 3.927: 33.4291% ( 1308) 00:13:51.987 3.927 - 3.956: 43.4671% ( 978) 00:13:51.987 3.956 - 3.985: 54.1825% ( 1044) 00:13:51.987 3.985 - 4.015: 65.2058% ( 1074) 00:13:51.987 4.015 - 4.044: 72.4725% ( 708) 00:13:51.987 4.044 - 4.073: 76.5781% ( 400) 00:13:51.987 4.073 - 4.102: 79.7496% ( 309) 00:13:51.987 4.102 - 4.131: 81.9050% ( 210) 00:13:51.987 4.131 - 4.160: 83.5472% ( 160) 00:13:51.987 4.160 - 4.189: 85.0354% ( 145) 00:13:51.987 4.189 - 4.218: 86.3492% ( 128) 00:13:51.987 4.218 - 4.247: 87.5706% ( 119) 00:13:51.987 4.247 - 4.276: 89.0178% ( 141) 00:13:51.987 4.276 - 4.305: 90.9371% ( 187) 00:13:51.987 4.305 - 4.335: 92.7846% ( 180) 00:13:51.987 4.335 - 4.364: 94.5191% ( 169) 00:13:51.987 4.364 - 4.393: 95.4634% ( 92) 00:13:51.987 4.393 - 4.422: 96.4282% ( 94) 00:13:51.987 4.422 - 4.451: 96.7874% ( 35) 00:13:51.987 4.451 - 4.480: 97.0954% ( 30) 00:13:51.987 4.480 - 4.509: 97.3930% ( 29) 00:13:51.987 4.509 - 4.538: 97.5264% ( 13) 00:13:51.987 4.538 - 4.567: 97.6599% ( 13) 00:13:51.987 4.567 - 4.596: 97.6804% ( 2) 00:13:51.987 4.596 - 4.625: 97.7317% ( 5) 00:13:51.987 4.625 - 4.655: 97.7625% ( 3) 00:13:51.987 4.655 - 4.684: 97.7728% ( 1) 00:13:51.987 4.684 - 4.713: 97.7933% ( 2) 00:13:51.987 4.713 - 4.742: 97.8036% ( 1) 00:13:51.987 4.742 - 4.771: 97.8651% ( 6) 00:13:51.987 4.771 - 4.800: 97.8959% ( 3) 00:13:51.987 4.800 - 4.829: 97.9780% ( 8) 00:13:51.987 4.829 - 4.858: 98.0499% ( 7) 00:13:51.987 4.858 - 4.887: 98.1115% ( 6) 00:13:51.987 4.887 - 4.916: 98.1730% ( 6) 00:13:51.987 4.916 - 4.945: 98.2552% ( 8) 00:13:51.987 4.945 - 4.975: 98.3373% ( 8) 00:13:51.987 4.975 - 5.004: 98.4194% ( 8) 00:13:51.987 5.004 - 5.033: 98.5015% ( 8) 00:13:51.987 5.033 - 5.062: 98.5836% ( 8) 00:13:51.987 5.062 - 5.091: 98.6760% ( 9) 00:13:51.987 5.091 - 5.120: 98.7068% ( 3) 00:13:51.987 5.120 - 5.149: 98.7273% ( 2) 00:13:51.987 5.149 - 5.178: 98.7991% ( 7) 00:13:51.987 5.178 - 5.207: 98.8299% ( 3) 00:13:51.987 5.236 - 5.265: 98.8505% ( 2) 00:13:51.987 5.265 - 5.295: 98.8812% ( 3) 00:13:51.987 5.295 - 5.324: 98.8915% ( 1) 00:13:51.987 5.353 - 5.382: 98.9223% ( 3) 00:13:51.987 5.382 - 5.411: 98.9326% ( 1) 00:13:51.987 5.411 - 5.440: 98.9428% ( 1) 00:13:51.987 5.585 - 5.615: 98.9531% ( 1) 00:13:51.987 5.876 - 5.905: 98.9634% ( 1) 00:13:51.987 8.611 - 8.669: 98.9736% ( 1) 00:13:51.987 8.669 - 8.727: 98.9839% ( 1) 00:13:51.987 8.844 - 8.902: 98.9941% ( 1) 00:13:51.987 8.960 - 9.018: 99.0044% ( 1) 00:13:51.987 9.135 - 9.193: 99.0147% ( 1) 00:13:51.987 9.309 - 9.367: 99.0352% ( 2) 00:13:51.987 9.425 - 9.484: 99.0455% ( 1) 00:13:51.987 9.542 - 9.600: 99.0557% ( 1) 00:13:51.987 9.775 - 9.833: 99.0865% ( 3) 00:13:51.987 9.833 - 9.891: 99.0968% ( 1) 00:13:51.987 9.891 - 9.949: 99.1071% ( 1) 00:13:51.987 9.949 - 10.007: 99.1173% ( 1) 00:13:51.987 10.007 - 10.065: 99.1481% ( 3) 00:13:51.987 10.065 - 10.124: 99.1686% ( 2) 00:13:51.987 10.240 - 10.298: 99.1789% ( 1) 00:13:51.987 10.356 - 10.415: 99.1994% ( 2) 00:13:51.987 10.647 - 10.705: 99.2200% ( 2) 
00:13:51.987 10.764 - 10.822: 99.2302% ( 1) 00:13:51.987 10.880 - 10.938: 99.2405% ( 1) 00:13:51.987 10.938 - 10.996: 99.2507% ( 1) 00:13:51.987 11.113 - 11.171: 99.2713% ( 2) 00:13:51.987 11.171 - 11.229: 99.2918% ( 2) 00:13:51.987 11.345 - 11.404: 99.3021% ( 1) 00:13:51.987 11.404 - 11.462: 99.3226% ( 2) 00:13:51.987 11.753 - 11.811: 99.3329% ( 1) 00:13:51.987 11.927 - 11.985: 99.3431% ( 1) 00:13:51.987 12.393 - 12.451: 99.3534% ( 1) 00:13:51.987 12.800 - 12.858: 99.3636% ( 1) 00:13:51.987 13.033 - 13.091: 99.3739% ( 1) 00:13:51.987 13.207 - 13.265: 99.3842% ( 1) 00:13:51.987 13.382 - 13.440: 99.3944% ( 1) 00:13:51.987 13.615 - 13.673: 99.4047% ( 1) 00:13:51.987 13.789 - 13.847: 99.4150% ( 1) 00:13:51.987 13.847 - 13.905: 99.4252% ( 1) 00:13:51.987 13.964 - 14.022: 99.4355% ( 1) 00:13:51.987 14.138 - 14.196: 99.4458% ( 1) 00:13:51.987 14.487 - 14.545: 99.4560% ( 1) 00:13:51.987 14.662 - 14.720: 99.4765% ( 2) 00:13:51.987 14.895 - 15.011: 99.4868% ( 1) 00:13:51.987 15.011 - 15.127: 99.5073% ( 2) 00:13:51.987 15.127 - 15.244: 99.5587% ( 5) 00:13:51.987 15.476 - 15.593: 99.5792% ( 2) 00:13:51.987 15.593 - 15.709: 99.5997% ( 2) 00:13:51.987 15.709 - 15.825: 99.6100% ( 1) 00:13:51.987 15.942 - 16.058: 99.6202% ( 1) 00:13:51.987 16.175 - 16.291: 99.6305% ( 1) 00:13:51.987 16.407 - 16.524: 99.6510% ( 2) 00:13:51.987 16.524 - 16.640: 99.6613% ( 1) 00:13:51.987 16.756 - 16.873: 99.6716% ( 1) 00:13:51.987 16.989 - 17.105: 99.6818% ( 1) 00:13:51.987 17.222 - 17.338: 99.6921% ( 1) 00:13:51.987 18.967 - 19.084: 99.7126% ( 2) 00:13:51.987 19.200 - 19.316: 99.7331% ( 2) 00:13:51.987 19.316 - 19.433: 99.7639% ( 3) 00:13:51.987 19.665 - 19.782: 99.7845% ( 2) 00:13:51.987 19.782 - 19.898: 99.7947% ( 1) 00:13:51.987 20.131 - 20.247: 99.8153% ( 2) 00:13:51.987 20.480 - 20.596: 99.8358% ( 2) 00:13:51.987 20.596 - 20.713: 99.8563% ( 2) 00:13:51.987 21.644 - 21.760: 99.8666% ( 1) 00:13:51.987 22.109 - 22.225: 99.8768% ( 1) 00:13:51.987 23.040 - 23.156: 99.8871% ( 1) 00:13:51.987 3961.949 - 3991.738: 99.8974% ( 1) 00:13:51.987 3991.738 - 4021.527: 99.9589% ( 6) 00:13:51.987 4021.527 - 4051.316: 100.0000% ( 4) 00:13:51.987 00:13:51.987 Complete histogram 00:13:51.987 ================== 00:13:51.987 Range in us Cumulative Count 00:13:51.987 2.167 - 2.182: 0.5132% ( 50) 00:13:51.987 2.182 - 2.196: 5.9119% ( 526) 00:13:51.987 2.196 - 2.211: 38.9716% ( 3221) 00:13:51.987 2.211 - 2.225: 59.6531% ( 2015) 00:13:51.987 2.225 - 2.240: 75.3053% ( 1525) 00:13:51.987 2.240 - 2.255: 77.9945% ( 262) 00:13:51.987 2.255 - 2.269: 80.6425% ( 258) 00:13:51.987 2.269 - 2.284: 85.1586% ( 440) 00:13:51.987 2.284 - 2.298: 88.6688% ( 342) 00:13:51.987 2.298 - 2.313: 90.8858% ( 216) 00:13:51.987 2.313 - 2.327: 91.9737% ( 106) 00:13:51.987 2.327 - 2.342: 92.7538% ( 76) 00:13:51.987 2.342 - 2.356: 93.6262% ( 85) 00:13:51.987 2.356 - 2.371: 94.3241% ( 68) 00:13:51.987 2.371 - 2.385: 94.5910% ( 26) 00:13:51.987 2.385 - 2.400: 94.9707% ( 37) 00:13:51.987 2.400 - 2.415: 95.2273% ( 25) 00:13:51.987 2.415 - 2.429: 95.5250% ( 29) 00:13:51.987 2.429 - 2.444: 95.8534% ( 32) 00:13:51.987 2.444 - 2.458: 96.0587% ( 20) 00:13:51.987 2.458 - 2.473: 96.3769% ( 31) 00:13:51.987 2.473 - 2.487: 96.7156% ( 33) 00:13:51.987 2.487 - 2.502: 96.9106% ( 19) 00:13:51.987 2.502 - 2.516: 97.1672% ( 25) 00:13:51.987 2.516 - 2.531: 97.3622% ( 19) 00:13:51.987 2.531 - 2.545: 97.5264% ( 16) 00:13:51.987 2.545 - 2.560: 97.7625% ( 23) 00:13:51.987 2.560 - 2.575: 97.8959% ( 13) 00:13:51.987 2.575 - 2.589: 98.0191% ( 12) 00:13:51.987 2.589 - 2.604: 98.1217% ( 10) 
00:13:51.987 2.604 - 2.618: 98.1525% ( 3) 00:13:51.987 2.618 - 2.633: 98.2244% ( 7) 00:13:51.987 2.633 - 2.647: 98.2757% ( 5) 00:13:51.987 2.647 - 2.662: 98.3065% ( 3) 00:13:51.987 2.662 - 2.676: 98.3270% ( 2) 00:13:51.987 2.735 - 2.749: 98.3373% ( 1) 00:13:51.987 3.505 - 3.520: 98.3578% ( 2) 00:13:51.987 3.520 - 3.535: 98.3681% ( 1) 00:13:51.988 3.535 - 3.549: 98.3783% ( 1) 00:13:51.988 3.549 - 3.564: 98.3989% ( 2) 00:13:51.988 3.578 - 3.593: 98.4194% ( 2) 00:13:51.988 3.607 - 3.622: 98.4296% ( 1) 00:13:51.988 3.622 - 3.636: 98.4502% ( 2) 00:13:51.988 3.636 - 3.651: 98.4604% ( 1) 00:13:51.988 3.665 - 3.680: 98.4707% ( 1) 00:13:51.988 3.680 - 3.695: 98.4810% ( 1) 00:13:51.988 3.709 - 3.724: 98.5015% ( 2) 00:13:51.988 3.724 - 3.753: 98.5323% ( 3) 00:13:51.988 3.753 - 3.782: 98.5425% ( 1) 00:13:51.988 3.811 - 3.840: 98.5528% ( 1) 00:13:51.988 3.840 - 3.869: 98.5631% ( 1) 00:13:51.988 3.869 - 3.898: 98.5733% ( 1) 00:13:51.988 3.927 - 3.956: 98.5836% ( 1) 00:13:51.988 3.985 - 4.015: 98.5939% ( 1) 00:13:51.988 4.015 - 4.044: 98.6144% ( 2) 00:13:51.988 4.131 - 4.160: 98.6247% ( 1) 00:13:51.988 4.189 - 4.218: 98.6349% ( 1) 00:13:51.988 4.305 - 4.335: 98.6452% ( 1) 00:13:51.988 4.596 - 4.625: 98.6554% ( 1) 00:13:51.988 7.098 - 7.127: 98.6657% ( 1) 00:13:51.988 7.127 - 7.156: 98.6760% ( 1) 00:13:51.988 7.505 - 7.564: 98.6862% ( 1) 00:13:51.988 7.564 - 7.622: 98.6965% ( 1) 00:13:51.988 7.680 - 7.738: 98.7068% ( 1) 00:13:51.988 7.738 - 7.796: 98.7376% ( 3) 00:13:51.988 7.796 - 7.855: 98.7478% ( 1) 00:13:51.988 7.855 - 7.913: 98.7581% ( 1) 00:13:51.988 7.913 - 7.971: 98.7683% ( 1) 00:13:51.988 8.029 - 8.087: 98.7786% ( 1) 00:13:51.988 8.204 - 8.262: 98.7889% ( 1) 00:13:51.988 8.378 - 8.436: 98.7991% ( 1) 00:13:51.988 8.436 - 8.495: 98.8197% ( 2) 00:13:51.988 8.495 - 8.553: 98.8402% ( 2) 00:13:51.988 8.669 - 8.727: 98.8607% ( 2) 00:13:51.988 8.727 - 8.785: 98.8710% ( 1) 00:13:51.988 8.785 - 8.844: 98.8812% ( 1) 00:13:51.988 9.135 - 9.193: 98.9018% ( 2) 00:13:51.988 9.193 - 9.251: 98.9120% ( 1) 00:13:51.988 9.251 - 9.309: 98.9223% ( 1) 00:13:51.988 9.367 - 9.425: 98.9326% ( 1) 00:13:51.988 9.833 - 9.891: 98.9428% ( 1) 00:13:51.988 10.182 - 10.240: 98.9531% ( 1) 00:13:51.988 10.531 - 10.589: 98.9634% ( 1) 00:13:51.988 12.102 - 12.160: 98.9736% ( 1) 00:13:51.988 13.091 - 13.149: 98.9839% ( 1) 00:13:51.988 13.905 - 13.964: 98.9941% ( 1) 00:13:51.988 14.196 - 14.255: 99.0044% ( 1) 00:13:51.988 14.836 - 14.895: 99.0147% ( 1) 00:13:51.988 16.989 - 17.105: 99.0352% ( 2) 00:13:51.988 17.105 - 17.222: 99.0557% ( 2) 00:13:51.988 17.222 - 17.338: 99.0660% ( 1) 00:13:51.988 17.338 - 17.455: 99.0763% ( 1) 00:13:51.988 17.455 - 17.571: 99.0968% ( 2) 00:13:51.988 17.571 - 17.687: 99.1071% ( 1) 00:13:51.988 17.687 - 17.804: 99.1173% ( 1) 00:13:51.988 17.804 - 17.920: 99.1276% ( 1) 00:13:51.988 18.036 - 18.153: 99.1584% ( 3) 00:13:51.988 18.502 - 18.618: 99.1686% ( 1) 00:13:51.988 19.898 - 20.015: 99.1789% ( 1) 00:13:51.988 22.575 - 22.691: 99.1892% ( 1) 00:13:51.988 26.880 - 26.996: 99.1994% ( 1) 00:13:51.988 31.884 - 32.116: 99.2097% ( 1) 00:13:51.988 3038.487 - 3053.382: 99.2200% ( 1) 00:13:51.988 3053.382 - 3068.276: 99.2302% ( 1) 00:13:51.988 3932.160 - 3961.949: 99.2507% ( 2) 00:13:51.988 3961.949 - 3991.738: 99.2815% ( 3) 00:13:51.988 3991.738 - 4021.527: 99.7434% ( 45) 00:13:51.988 4021.527 - 4051.316: 99.9589% ( 21) 00:13:51.988 7000.436 - 7030.225: 99.9795% ( 2) 00:13:51.988 7923.898 - 7983.476: 99.9897% ( 1) 00:13:51.988 7983.476 - 8043.055: 100.0000% ( 1) 00:13:51.988 00:13:51.988 02:30:32 -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:51.988 02:30:32 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:51.988 02:30:32 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:51.988 02:30:32 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:51.988 02:30:32 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:51.988 [2024-11-21 02:30:32.566832] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:51.988 [ 00:13:51.988 { 00:13:51.988 "allow_any_host": true, 00:13:51.988 "hosts": [], 00:13:51.988 "listen_addresses": [], 00:13:51.988 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:51.988 "subtype": "Discovery" 00:13:51.988 }, 00:13:51.988 { 00:13:51.988 "allow_any_host": true, 00:13:51.988 "hosts": [], 00:13:51.988 "listen_addresses": [ 00:13:51.988 { 00:13:51.988 "adrfam": "IPv4", 00:13:51.988 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:51.988 "transport": "VFIOUSER", 00:13:51.988 "trsvcid": "0", 00:13:51.988 "trtype": "VFIOUSER" 00:13:51.988 } 00:13:51.988 ], 00:13:51.988 "max_cntlid": 65519, 00:13:51.988 "max_namespaces": 32, 00:13:51.988 "min_cntlid": 1, 00:13:51.988 "model_number": "SPDK bdev Controller", 00:13:51.988 "namespaces": [ 00:13:51.988 { 00:13:51.988 "bdev_name": "Malloc1", 00:13:51.988 "name": "Malloc1", 00:13:51.988 "nguid": "BC2CD621543C45DF9F8D3A72F4694E05", 00:13:51.988 "nsid": 1, 00:13:51.988 "uuid": "bc2cd621-543c-45df-9f8d-3a72f4694e05" 00:13:51.988 } 00:13:51.988 ], 00:13:51.988 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:51.988 "serial_number": "SPDK1", 00:13:51.988 "subtype": "NVMe" 00:13:51.988 }, 00:13:51.988 { 00:13:51.988 "allow_any_host": true, 00:13:51.988 "hosts": [], 00:13:51.988 "listen_addresses": [ 00:13:51.988 { 00:13:51.988 "adrfam": "IPv4", 00:13:51.988 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:51.988 "transport": "VFIOUSER", 00:13:51.988 "trsvcid": "0", 00:13:51.988 "trtype": "VFIOUSER" 00:13:51.988 } 00:13:51.988 ], 00:13:51.988 "max_cntlid": 65519, 00:13:51.988 "max_namespaces": 32, 00:13:51.988 "min_cntlid": 1, 00:13:51.988 "model_number": "SPDK bdev Controller", 00:13:51.988 "namespaces": [ 00:13:51.988 { 00:13:51.988 "bdev_name": "Malloc2", 00:13:51.988 "name": "Malloc2", 00:13:51.988 "nguid": "F9EC39E8D8524600995AD69AA6A74E4B", 00:13:51.988 "nsid": 1, 00:13:51.988 "uuid": "f9ec39e8-d852-4600-995a-d69aa6a74e4b" 00:13:51.988 } 00:13:51.988 ], 00:13:51.988 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:51.988 "serial_number": "SPDK2", 00:13:51.988 "subtype": "NVMe" 00:13:51.988 } 00:13:51.988 ] 00:13:51.988 02:30:32 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:51.988 02:30:32 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71204 00:13:51.988 02:30:32 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:51.988 02:30:32 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:51.988 02:30:32 -- common/autotest_common.sh@1254 -- # local i=0 00:13:51.988 02:30:32 -- common/autotest_common.sh@1255 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:51.988 02:30:32 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:13:51.988 02:30:32 -- common/autotest_common.sh@1257 -- # i=1 00:13:51.988 02:30:32 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:52.247 02:30:32 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:52.247 02:30:32 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:13:52.247 02:30:32 -- common/autotest_common.sh@1257 -- # i=2 00:13:52.247 02:30:32 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:13:52.247 02:30:32 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:52.247 02:30:32 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:52.247 02:30:32 -- common/autotest_common.sh@1265 -- # return 0 00:13:52.247 02:30:32 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:52.247 02:30:32 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:52.507 Malloc3 00:13:52.766 02:30:33 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:52.766 02:30:33 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:53.024 Asynchronous Event Request test 00:13:53.024 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.024 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.024 Registering asynchronous event callbacks... 00:13:53.024 Starting namespace attribute notice tests for all controllers... 00:13:53.024 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:53.024 aer_cb - Changed Namespace 00:13:53.024 Cleaning up... 
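The namespace hot-add that triggers the AER above is driven entirely over JSON-RPC. A minimal sketch of the same sequence, assuming a running target that already exposes nqn.2019-07.io.spdk:cnode1 on the vfio-user socket used in this run:

    # create a 64 MiB malloc bdev with 512 B blocks and attach it as namespace 2
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    # the connected host then receives a namespace-attribute-changed AER;
    # the updated namespace layout appears in the subsystem listing that follows
    scripts/rpc.py nvmf_get_subsystems
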
00:13:53.024 [ 00:13:53.024 { 00:13:53.024 "allow_any_host": true, 00:13:53.024 "hosts": [], 00:13:53.024 "listen_addresses": [], 00:13:53.024 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:53.024 "subtype": "Discovery" 00:13:53.024 }, 00:13:53.024 { 00:13:53.024 "allow_any_host": true, 00:13:53.024 "hosts": [], 00:13:53.024 "listen_addresses": [ 00:13:53.024 { 00:13:53.024 "adrfam": "IPv4", 00:13:53.024 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:53.025 "transport": "VFIOUSER", 00:13:53.025 "trsvcid": "0", 00:13:53.025 "trtype": "VFIOUSER" 00:13:53.025 } 00:13:53.025 ], 00:13:53.025 "max_cntlid": 65519, 00:13:53.025 "max_namespaces": 32, 00:13:53.025 "min_cntlid": 1, 00:13:53.025 "model_number": "SPDK bdev Controller", 00:13:53.025 "namespaces": [ 00:13:53.025 { 00:13:53.025 "bdev_name": "Malloc1", 00:13:53.025 "name": "Malloc1", 00:13:53.025 "nguid": "BC2CD621543C45DF9F8D3A72F4694E05", 00:13:53.025 "nsid": 1, 00:13:53.025 "uuid": "bc2cd621-543c-45df-9f8d-3a72f4694e05" 00:13:53.025 }, 00:13:53.025 { 00:13:53.025 "bdev_name": "Malloc3", 00:13:53.025 "name": "Malloc3", 00:13:53.025 "nguid": "B34904128C084844BD423429FD9E6715", 00:13:53.025 "nsid": 2, 00:13:53.025 "uuid": "b3490412-8c08-4844-bd42-3429fd9e6715" 00:13:53.025 } 00:13:53.025 ], 00:13:53.025 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:53.025 "serial_number": "SPDK1", 00:13:53.025 "subtype": "NVMe" 00:13:53.025 }, 00:13:53.025 { 00:13:53.025 "allow_any_host": true, 00:13:53.025 "hosts": [], 00:13:53.025 "listen_addresses": [ 00:13:53.025 { 00:13:53.025 "adrfam": "IPv4", 00:13:53.025 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:53.025 "transport": "VFIOUSER", 00:13:53.025 "trsvcid": "0", 00:13:53.025 "trtype": "VFIOUSER" 00:13:53.025 } 00:13:53.025 ], 00:13:53.025 "max_cntlid": 65519, 00:13:53.025 "max_namespaces": 32, 00:13:53.025 "min_cntlid": 1, 00:13:53.025 "model_number": "SPDK bdev Controller", 00:13:53.025 "namespaces": [ 00:13:53.025 { 00:13:53.025 "bdev_name": "Malloc2", 00:13:53.025 "name": "Malloc2", 00:13:53.025 "nguid": "F9EC39E8D8524600995AD69AA6A74E4B", 00:13:53.025 "nsid": 1, 00:13:53.025 "uuid": "f9ec39e8-d852-4600-995a-d69aa6a74e4b" 00:13:53.025 } 00:13:53.025 ], 00:13:53.025 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:53.025 "serial_number": "SPDK2", 00:13:53.025 "subtype": "NVMe" 00:13:53.025 } 00:13:53.025 ] 00:13:53.025 02:30:33 -- target/nvmf_vfio_user.sh@44 -- # wait 71204 00:13:53.025 02:30:33 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:53.025 02:30:33 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:53.025 02:30:33 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:53.025 02:30:33 -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:53.285 [2024-11-21 02:30:33.670447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:53.285 [2024-11-21 02:30:33.670520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71241 ] 00:13:53.285 [2024-11-21 02:30:33.809658] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:53.285 [2024-11-21 02:30:33.819029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:53.285 [2024-11-21 02:30:33.819078] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f704dcb6000 00:13:53.285 [2024-11-21 02:30:33.820027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.285 [2024-11-21 02:30:33.821029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.285 [2024-11-21 02:30:33.822033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.285 [2024-11-21 02:30:33.823041] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:53.285 [2024-11-21 02:30:33.824045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:53.285 [2024-11-21 02:30:33.825046] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.285 [2024-11-21 02:30:33.826053] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:53.285 [2024-11-21 02:30:33.827058] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.285 [2024-11-21 02:30:33.828066] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:53.285 [2024-11-21 02:30:33.828091] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f704dcab000 00:13:53.285 [2024-11-21 02:30:33.829259] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:53.285 [2024-11-21 02:30:33.844646] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:53.285 [2024-11-21 02:30:33.844699] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:53.285 [2024-11-21 02:30:33.848838] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:53.285 [2024-11-21 02:30:33.848916] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:53.285 [2024-11-21 02:30:33.849002] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:53.285 [2024-11-21 
02:30:33.849034] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:53.285 [2024-11-21 02:30:33.849041] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:53.285 [2024-11-21 02:30:33.849858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:53.285 [2024-11-21 02:30:33.849885] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:53.285 [2024-11-21 02:30:33.849909] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:53.285 [2024-11-21 02:30:33.850845] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:53.285 [2024-11-21 02:30:33.850869] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:53.285 [2024-11-21 02:30:33.850893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:53.285 [2024-11-21 02:30:33.851869] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:53.285 [2024-11-21 02:30:33.851892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:53.285 [2024-11-21 02:30:33.852874] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:53.285 [2024-11-21 02:30:33.852898] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:53.285 [2024-11-21 02:30:33.852918] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:53.285 [2024-11-21 02:30:33.852928] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:53.285 [2024-11-21 02:30:33.853035] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:53.285 [2024-11-21 02:30:33.853041] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:53.285 [2024-11-21 02:30:33.853062] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:53.285 [2024-11-21 02:30:33.853880] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:53.286 [2024-11-21 02:30:33.854878] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:53.286 [2024-11-21 02:30:33.855894] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:13:53.286 [2024-11-21 02:30:33.856926] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:53.286 [2024-11-21 02:30:33.857913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:53.286 [2024-11-21 02:30:33.857937] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:53.286 [2024-11-21 02:30:33.857956] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.857978] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:53.286 [2024-11-21 02:30:33.857995] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.858014] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:53.286 [2024-11-21 02:30:33.858020] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:53.286 [2024-11-21 02:30:33.858037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:53.286 [2024-11-21 02:30:33.866758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:53.286 [2024-11-21 02:30:33.866786] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:53.286 [2024-11-21 02:30:33.866805] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:53.286 [2024-11-21 02:30:33.866810] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:53.286 [2024-11-21 02:30:33.866815] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:53.286 [2024-11-21 02:30:33.866821] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:53.286 [2024-11-21 02:30:33.866826] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:53.286 [2024-11-21 02:30:33.866832] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.866849] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.866863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:53.286 [2024-11-21 02:30:33.873769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:53.286 [2024-11-21 02:30:33.873815] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.286 [2024-11-21 02:30:33.873827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.286 [2024-11-21 02:30:33.873836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.286 [2024-11-21 02:30:33.873848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.286 [2024-11-21 02:30:33.873854] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.873868] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.873879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:53.286 [2024-11-21 02:30:33.881765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:53.286 [2024-11-21 02:30:33.881785] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:53.286 [2024-11-21 02:30:33.881793] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.881808] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.881821] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.881833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:53.286 [2024-11-21 02:30:33.889757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:53.286 [2024-11-21 02:30:33.889841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.889854] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.889864] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:53.286 [2024-11-21 02:30:33.889870] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:53.286 [2024-11-21 02:30:33.889878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:53.286 [2024-11-21 02:30:33.897765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:53.286 [2024-11-21 
02:30:33.897809] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:53.286 [2024-11-21 02:30:33.897823] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.897833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.897842] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:53.286 [2024-11-21 02:30:33.897848] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:53.286 [2024-11-21 02:30:33.897855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:53.286 [2024-11-21 02:30:33.905757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:53.286 [2024-11-21 02:30:33.905791] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.905805] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.905815] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:53.286 [2024-11-21 02:30:33.905821] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:53.286 [2024-11-21 02:30:33.905829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:53.286 [2024-11-21 02:30:33.913768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:53.286 [2024-11-21 02:30:33.913792] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.913815] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.913828] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.913835] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.913841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.913847] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:53.286 [2024-11-21 02:30:33.913852] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:53.286 [2024-11-21 02:30:33.913857] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:53.286 [2024-11-21 02:30:33.913881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:53.286 [2024-11-21 02:30:33.921757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:53.286 [2024-11-21 02:30:33.921809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:53.546 [2024-11-21 02:30:33.929785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:53.547 [2024-11-21 02:30:33.929815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:53.547 [2024-11-21 02:30:33.937802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:53.547 [2024-11-21 02:30:33.937840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:53.547 [2024-11-21 02:30:33.945799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:53.547 [2024-11-21 02:30:33.945829] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:53.547 [2024-11-21 02:30:33.945846] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:53.547 [2024-11-21 02:30:33.945850] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:53.547 [2024-11-21 02:30:33.945853] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:53.547 [2024-11-21 02:30:33.945861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:53.547 [2024-11-21 02:30:33.945870] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:53.547 [2024-11-21 02:30:33.945875] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:53.547 [2024-11-21 02:30:33.945881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:53.547 [2024-11-21 02:30:33.945889] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:53.547 [2024-11-21 02:30:33.945894] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:53.547 [2024-11-21 02:30:33.945901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:53.547 [2024-11-21 02:30:33.945909] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:53.547 [2024-11-21 02:30:33.945914] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:53.547 [2024-11-21 02:30:33.945920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:53.547 [2024-11-21 02:30:33.953797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:53.547 [2024-11-21 02:30:33.953846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:53.547 [2024-11-21 02:30:33.953859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:53.547 [2024-11-21 02:30:33.953868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:53.547 ===================================================== 00:13:53.547 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:53.547 ===================================================== 00:13:53.547 Controller Capabilities/Features 00:13:53.547 ================================ 00:13:53.547 Vendor ID: 4e58 00:13:53.547 Subsystem Vendor ID: 4e58 00:13:53.547 Serial Number: SPDK2 00:13:53.547 Model Number: SPDK bdev Controller 00:13:53.547 Firmware Version: 24.01.1 00:13:53.547 Recommended Arb Burst: 6 00:13:53.547 IEEE OUI Identifier: 8d 6b 50 00:13:53.547 Multi-path I/O 00:13:53.547 May have multiple subsystem ports: Yes 00:13:53.547 May have multiple controllers: Yes 00:13:53.547 Associated with SR-IOV VF: No 00:13:53.547 Max Data Transfer Size: 131072 00:13:53.547 Max Number of Namespaces: 32 00:13:53.547 Max Number of I/O Queues: 127 00:13:53.547 NVMe Specification Version (VS): 1.3 00:13:53.547 NVMe Specification Version (Identify): 1.3 00:13:53.547 Maximum Queue Entries: 256 00:13:53.547 Contiguous Queues Required: Yes 00:13:53.547 Arbitration Mechanisms Supported 00:13:53.547 Weighted Round Robin: Not Supported 00:13:53.547 Vendor Specific: Not Supported 00:13:53.547 Reset Timeout: 15000 ms 00:13:53.547 Doorbell Stride: 4 bytes 00:13:53.547 NVM Subsystem Reset: Not Supported 00:13:53.547 Command Sets Supported 00:13:53.547 NVM Command Set: Supported 00:13:53.547 Boot Partition: Not Supported 00:13:53.547 Memory Page Size Minimum: 4096 bytes 00:13:53.547 Memory Page Size Maximum: 4096 bytes 00:13:53.547 Persistent Memory Region: Not Supported 00:13:53.547 Optional Asynchronous Events Supported 00:13:53.547 Namespace Attribute Notices: Supported 00:13:53.547 Firmware Activation Notices: Not Supported 00:13:53.547 ANA Change Notices: Not Supported 00:13:53.547 PLE Aggregate Log Change Notices: Not Supported 00:13:53.547 LBA Status Info Alert Notices: Not Supported 00:13:53.547 EGE Aggregate Log Change Notices: Not Supported 00:13:53.547 Normal NVM Subsystem Shutdown event: Not Supported 00:13:53.547 Zone Descriptor Change Notices: Not Supported 00:13:53.547 Discovery Log Change Notices: Not Supported 00:13:53.547 Controller Attributes 00:13:53.547 128-bit Host Identifier: Supported 00:13:53.547 Non-Operational Permissive Mode: Not Supported 00:13:53.547 NVM Sets: Not Supported 00:13:53.547 Read Recovery Levels: Not Supported 00:13:53.547 Endurance Groups: Not Supported 00:13:53.547 Predictable Latency Mode: Not Supported 00:13:53.547 Traffic Based Keep ALive: Not Supported 00:13:53.547 Namespace Granularity: Not Supported 00:13:53.547 SQ Associations: Not Supported 00:13:53.547 UUID List: Not Supported 00:13:53.547 Multi-Domain Subsystem: Not Supported 00:13:53.547 Fixed Capacity Management: Not Supported 00:13:53.547 
Variable Capacity Management: Not Supported 00:13:53.547 Delete Endurance Group: Not Supported 00:13:53.547 Delete NVM Set: Not Supported 00:13:53.547 Extended LBA Formats Supported: Not Supported 00:13:53.547 Flexible Data Placement Supported: Not Supported 00:13:53.547 00:13:53.547 Controller Memory Buffer Support 00:13:53.547 ================================ 00:13:53.547 Supported: No 00:13:53.547 00:13:53.547 Persistent Memory Region Support 00:13:53.547 ================================ 00:13:53.547 Supported: No 00:13:53.547 00:13:53.547 Admin Command Set Attributes 00:13:53.547 ============================ 00:13:53.547 Security Send/Receive: Not Supported 00:13:53.547 Format NVM: Not Supported 00:13:53.547 Firmware Activate/Download: Not Supported 00:13:53.547 Namespace Management: Not Supported 00:13:53.547 Device Self-Test: Not Supported 00:13:53.547 Directives: Not Supported 00:13:53.547 NVMe-MI: Not Supported 00:13:53.547 Virtualization Management: Not Supported 00:13:53.547 Doorbell Buffer Config: Not Supported 00:13:53.547 Get LBA Status Capability: Not Supported 00:13:53.547 Command & Feature Lockdown Capability: Not Supported 00:13:53.547 Abort Command Limit: 4 00:13:53.547 Async Event Request Limit: 4 00:13:53.547 Number of Firmware Slots: N/A 00:13:53.547 Firmware Slot 1 Read-Only: N/A 00:13:53.547 Firmware Activation Without Reset: N/A 00:13:53.547 Multiple Update Detection Support: N/A 00:13:53.547 Firmware Update Granularity: No Information Provided 00:13:53.547 Per-Namespace SMART Log: No 00:13:53.547 Asymmetric Namespace Access Log Page: Not Supported 00:13:53.547 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:53.547 Command Effects Log Page: Supported 00:13:53.547 Get Log Page Extended Data: Supported 00:13:53.547 Telemetry Log Pages: Not Supported 00:13:53.547 Persistent Event Log Pages: Not Supported 00:13:53.547 Supported Log Pages Log Page: May Support 00:13:53.547 Commands Supported & Effects Log Page: Not Supported 00:13:53.547 Feature Identifiers & Effects Log Page:May Support 00:13:53.547 NVMe-MI Commands & Effects Log Page: May Support 00:13:53.547 Data Area 4 for Telemetry Log: Not Supported 00:13:53.547 Error Log Page Entries Supported: 128 00:13:53.547 Keep Alive: Supported 00:13:53.547 Keep Alive Granularity: 10000 ms 00:13:53.547 00:13:53.547 NVM Command Set Attributes 00:13:53.547 ========================== 00:13:53.547 Submission Queue Entry Size 00:13:53.547 Max: 64 00:13:53.547 Min: 64 00:13:53.547 Completion Queue Entry Size 00:13:53.547 Max: 16 00:13:53.547 Min: 16 00:13:53.547 Number of Namespaces: 32 00:13:53.547 Compare Command: Supported 00:13:53.547 Write Uncorrectable Command: Not Supported 00:13:53.547 Dataset Management Command: Supported 00:13:53.547 Write Zeroes Command: Supported 00:13:53.547 Set Features Save Field: Not Supported 00:13:53.547 Reservations: Not Supported 00:13:53.547 Timestamp: Not Supported 00:13:53.547 Copy: Supported 00:13:53.547 Volatile Write Cache: Present 00:13:53.547 Atomic Write Unit (Normal): 1 00:13:53.547 Atomic Write Unit (PFail): 1 00:13:53.547 Atomic Compare & Write Unit: 1 00:13:53.547 Fused Compare & Write: Supported 00:13:53.547 Scatter-Gather List 00:13:53.547 SGL Command Set: Supported (Dword aligned) 00:13:53.547 SGL Keyed: Not Supported 00:13:53.547 SGL Bit Bucket Descriptor: Not Supported 00:13:53.547 SGL Metadata Pointer: Not Supported 00:13:53.547 Oversized SGL: Not Supported 00:13:53.547 SGL Metadata Address: Not Supported 00:13:53.547 SGL Offset: Not Supported 00:13:53.547 Transport SGL Data 
Block: Not Supported 00:13:53.547 Replay Protected Memory Block: Not Supported 00:13:53.548 00:13:53.548 Firmware Slot Information 00:13:53.548 ========================= 00:13:53.548 Active slot: 1 00:13:53.548 Slot 1 Firmware Revision: 24.01.1 00:13:53.548 00:13:53.548 00:13:53.548 Commands Supported and Effects 00:13:53.548 ============================== 00:13:53.548 Admin Commands 00:13:53.548 -------------- 00:13:53.548 Get Log Page (02h): Supported 00:13:53.548 Identify (06h): Supported 00:13:53.548 Abort (08h): Supported 00:13:53.548 Set Features (09h): Supported 00:13:53.548 Get Features (0Ah): Supported 00:13:53.548 Asynchronous Event Request (0Ch): Supported 00:13:53.548 Keep Alive (18h): Supported 00:13:53.548 I/O Commands 00:13:53.548 ------------ 00:13:53.548 Flush (00h): Supported LBA-Change 00:13:53.548 Write (01h): Supported LBA-Change 00:13:53.548 Read (02h): Supported 00:13:53.548 Compare (05h): Supported 00:13:53.548 Write Zeroes (08h): Supported LBA-Change 00:13:53.548 Dataset Management (09h): Supported LBA-Change 00:13:53.548 Copy (19h): Supported LBA-Change 00:13:53.548 Unknown (79h): Supported LBA-Change 00:13:53.548 Unknown (7Ah): Supported 00:13:53.548 00:13:53.548 Error Log 00:13:53.548 ========= 00:13:53.548 00:13:53.548 Arbitration 00:13:53.548 =========== 00:13:53.548 Arbitration Burst: 1 00:13:53.548 00:13:53.548 Power Management 00:13:53.548 ================ 00:13:53.548 Number of Power States: 1 00:13:53.548 Current Power State: Power State #0 00:13:53.548 Power State #0: 00:13:53.548 Max Power: 0.00 W 00:13:53.548 Non-Operational State: Operational 00:13:53.548 Entry Latency: Not Reported 00:13:53.548 Exit Latency: Not Reported 00:13:53.548 Relative Read Throughput: 0 00:13:53.548 Relative Read Latency: 0 00:13:53.548 Relative Write Throughput: 0 00:13:53.548 Relative Write Latency: 0 00:13:53.548 Idle Power: Not Reported 00:13:53.548 Active Power: Not Reported 00:13:53.548 Non-Operational Permissive Mode: Not Supported 00:13:53.548 00:13:53.548 Health Information 00:13:53.548 ================== 00:13:53.548 Critical Warnings: 00:13:53.548 Available Spare Space: OK 00:13:53.548 Temperature: OK 00:13:53.548 Device Reliability: OK 00:13:53.548 Read Only: No 00:13:53.548 Volatile Memory Backup: OK 00:13:53.548 Current Temperature: 0 Kelvin[2024-11-21 02:30:33.954020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:53.548 [2024-11-21 02:30:33.961789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:53.548 [2024-11-21 02:30:33.961852] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:53.548 [2024-11-21 02:30:33.961866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.548 [2024-11-21 02:30:33.961874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.548 [2024-11-21 02:30:33.961881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.548 [2024-11-21 02:30:33.961888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.548 [2024-11-21 02:30:33.961958] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:53.548 [2024-11-21 02:30:33.961977] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:53.548 [2024-11-21 02:30:33.963008] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:53.548 [2024-11-21 02:30:33.963027] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:53.548 [2024-11-21 02:30:33.963965] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:53.548 [2024-11-21 02:30:33.963994] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:53.548 [2024-11-21 02:30:33.964111] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:53.548 [2024-11-21 02:30:33.965400] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:53.548 (-273 Celsius) 00:13:53.548 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:53.548 Available Spare: 0% 00:13:53.548 Available Spare Threshold: 0% 00:13:53.548 Life Percentage Used: 0% 00:13:53.548 Data Units Read: 0 00:13:53.548 Data Units Written: 0 00:13:53.548 Host Read Commands: 0 00:13:53.548 Host Write Commands: 0 00:13:53.548 Controller Busy Time: 0 minutes 00:13:53.548 Power Cycles: 0 00:13:53.548 Power On Hours: 0 hours 00:13:53.548 Unsafe Shutdowns: 0 00:13:53.548 Unrecoverable Media Errors: 0 00:13:53.548 Lifetime Error Log Entries: 0 00:13:53.548 Warning Temperature Time: 0 minutes 00:13:53.548 Critical Temperature Time: 0 minutes 00:13:53.548 00:13:53.548 Number of Queues 00:13:53.548 ================ 00:13:53.548 Number of I/O Submission Queues: 127 00:13:53.548 Number of I/O Completion Queues: 127 00:13:53.548 00:13:53.548 Active Namespaces 00:13:53.548 ================= 00:13:53.548 Namespace ID:1 00:13:53.548 Error Recovery Timeout: Unlimited 00:13:53.548 Command Set Identifier: NVM (00h) 00:13:53.548 Deallocate: Supported 00:13:53.548 Deallocated/Unwritten Error: Not Supported 00:13:53.548 Deallocated Read Value: Unknown 00:13:53.548 Deallocate in Write Zeroes: Not Supported 00:13:53.548 Deallocated Guard Field: 0xFFFF 00:13:53.548 Flush: Supported 00:13:53.548 Reservation: Supported 00:13:53.548 Namespace Sharing Capabilities: Multiple Controllers 00:13:53.548 Size (in LBAs): 131072 (0GiB) 00:13:53.548 Capacity (in LBAs): 131072 (0GiB) 00:13:53.548 Utilization (in LBAs): 131072 (0GiB) 00:13:53.548 NGUID: F9EC39E8D8524600995AD69AA6A74E4B 00:13:53.548 UUID: f9ec39e8-d852-4600-995a-d69aa6a74e4b 00:13:53.548 Thin Provisioning: Not Supported 00:13:53.548 Per-NS Atomic Units: Yes 00:13:53.548 Atomic Boundary Size (Normal): 0 00:13:53.548 Atomic Boundary Size (PFail): 0 00:13:53.548 Atomic Boundary Offset: 0 00:13:53.548 Maximum Single Source Range Length: 65535 00:13:53.548 Maximum Copy Length: 65535 00:13:53.548 Maximum Source Range Count: 1 00:13:53.548 NGUID/EUI64 Never Reused: No 00:13:53.548 Namespace Write Protected: No 00:13:53.548 Number of LBA Formats: 1 00:13:53.548 Current LBA Format: LBA Format #00 00:13:53.548 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:53.548 00:13:53.548 
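The controller and namespace properties dumped above can be re-read at any point while the vfio-user target is still listening. A minimal sketch, assuming the identify example app is built alongside spdk_nvme_perf (the exact path may differ by build layout); the transport address and subsystem NQN are the same ones used throughout the trace:

    # Re-query identify data for the SPDK2 controller over the same vfio-user endpoint
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
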
02:30:34 -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:58.818 Initializing NVMe Controllers 00:13:58.818 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:58.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:58.818 Initialization complete. Launching workers. 00:13:58.818 ======================================================== 00:13:58.818 Latency(us) 00:13:58.818 Device Information : IOPS MiB/s Average min max 00:13:58.818 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33327.19 130.18 3842.85 1173.03 10311.08 00:13:58.818 ======================================================== 00:13:58.818 Total : 33327.19 130.18 3842.85 1173.03 10311.08 00:13:58.818 00:13:58.818 02:30:39 -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:05.383 Initializing NVMe Controllers 00:14:05.383 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:05.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:05.383 Initialization complete. Launching workers. 00:14:05.383 ======================================================== 00:14:05.383 Latency(us) 00:14:05.383 Device Information : IOPS MiB/s Average min max 00:14:05.383 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34258.58 133.82 3736.24 1163.26 10664.63 00:14:05.383 ======================================================== 00:14:05.383 Total : 34258.58 133.82 3736.24 1163.26 10664.63 00:14:05.383 00:14:05.383 02:30:44 -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:09.573 Initializing NVMe Controllers 00:14:09.574 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:09.574 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:09.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:09.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:09.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:09.574 Initialization complete. Launching workers. 
00:14:09.574 Starting thread on core 2 00:14:09.574 Starting thread on core 3 00:14:09.574 Starting thread on core 1 00:14:09.574 02:30:50 -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:13.789 Initializing NVMe Controllers 00:14:13.789 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.789 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.789 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:13.789 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:13.789 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:13.789 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:13.789 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:14:13.789 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:13.789 Initialization complete. Launching workers. 00:14:13.789 Starting thread on core 1 with urgent priority queue 00:14:13.789 Starting thread on core 2 with urgent priority queue 00:14:13.789 Starting thread on core 3 with urgent priority queue 00:14:13.789 Starting thread on core 0 with urgent priority queue 00:14:13.789 SPDK bdev Controller (SPDK2 ) core 0: 4621.67 IO/s 21.64 secs/100000 ios 00:14:13.789 SPDK bdev Controller (SPDK2 ) core 1: 4922.67 IO/s 20.31 secs/100000 ios 00:14:13.789 SPDK bdev Controller (SPDK2 ) core 2: 3909.33 IO/s 25.58 secs/100000 ios 00:14:13.789 SPDK bdev Controller (SPDK2 ) core 3: 3984.33 IO/s 25.10 secs/100000 ios 00:14:13.789 ======================================================== 00:14:13.789 00:14:13.789 02:30:53 -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:13.789 Initializing NVMe Controllers 00:14:13.789 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.789 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.789 Namespace ID: 1 size: 0GB 00:14:13.789 Initialization complete. 00:14:13.789 INFO: using host memory buffer for IO 00:14:13.789 Hello world! 00:14:13.789 02:30:53 -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:14.725 Initializing NVMe Controllers 00:14:14.725 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:14.725 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:14.725 Initialization complete. Launching workers. 
00:14:14.725 submit (in ns) avg, min, max = 7606.6, 3744.5, 6033412.7 00:14:14.725 complete (in ns) avg, min, max = 31321.4, 2048.2, 7014789.1 00:14:14.725 00:14:14.725 Submit histogram 00:14:14.725 ================ 00:14:14.725 Range in us Cumulative Count 00:14:14.725 3.724 - 3.753: 0.0188% ( 2) 00:14:14.725 3.753 - 3.782: 0.3661% ( 37) 00:14:14.725 3.782 - 3.811: 6.2699% ( 629) 00:14:14.725 3.811 - 3.840: 18.7347% ( 1328) 00:14:14.725 3.840 - 3.869: 34.6443% ( 1695) 00:14:14.725 3.869 - 3.898: 47.1560% ( 1333) 00:14:14.725 3.898 - 3.927: 58.9356% ( 1255) 00:14:14.725 3.927 - 3.956: 69.4293% ( 1118) 00:14:14.725 3.956 - 3.985: 77.1541% ( 823) 00:14:14.725 3.985 - 4.015: 80.6364% ( 371) 00:14:14.725 4.015 - 4.044: 83.4335% ( 298) 00:14:14.725 4.044 - 4.073: 84.9071% ( 157) 00:14:14.725 4.073 - 4.102: 86.3807% ( 157) 00:14:14.725 4.102 - 4.131: 87.5352% ( 123) 00:14:14.725 4.131 - 4.160: 88.5864% ( 112) 00:14:14.725 4.160 - 4.189: 89.7691% ( 126) 00:14:14.725 4.189 - 4.218: 91.4680% ( 181) 00:14:14.725 4.218 - 4.247: 93.3452% ( 200) 00:14:14.725 4.247 - 4.276: 94.7250% ( 147) 00:14:14.725 4.276 - 4.305: 95.8513% ( 120) 00:14:14.725 4.305 - 4.335: 96.8181% ( 103) 00:14:14.725 4.335 - 4.364: 97.4376% ( 66) 00:14:14.725 4.364 - 4.393: 97.6910% ( 27) 00:14:14.725 4.393 - 4.422: 97.9257% ( 25) 00:14:14.725 4.422 - 4.451: 98.0571% ( 14) 00:14:14.725 4.451 - 4.480: 98.1415% ( 9) 00:14:14.725 4.480 - 4.509: 98.1791% ( 4) 00:14:14.725 4.509 - 4.538: 98.1885% ( 1) 00:14:14.725 4.713 - 4.742: 98.1979% ( 1) 00:14:14.725 4.742 - 4.771: 98.2072% ( 1) 00:14:14.725 4.800 - 4.829: 98.2260% ( 2) 00:14:14.725 4.887 - 4.916: 98.2354% ( 1) 00:14:14.725 4.945 - 4.975: 98.2448% ( 1) 00:14:14.725 5.004 - 5.033: 98.2636% ( 2) 00:14:14.725 5.033 - 5.062: 98.3293% ( 7) 00:14:14.725 5.062 - 5.091: 98.3480% ( 2) 00:14:14.725 5.091 - 5.120: 98.4044% ( 6) 00:14:14.725 5.120 - 5.149: 98.4231% ( 2) 00:14:14.725 5.149 - 5.178: 98.4701% ( 5) 00:14:14.725 5.178 - 5.207: 98.5264% ( 6) 00:14:14.725 5.207 - 5.236: 98.6109% ( 9) 00:14:14.725 5.236 - 5.265: 98.6390% ( 3) 00:14:14.725 5.265 - 5.295: 98.6578% ( 2) 00:14:14.725 5.295 - 5.324: 98.7329% ( 8) 00:14:14.725 5.324 - 5.353: 98.7704% ( 4) 00:14:14.725 5.353 - 5.382: 98.8361% ( 7) 00:14:14.725 5.382 - 5.411: 98.8643% ( 3) 00:14:14.725 5.411 - 5.440: 98.9018% ( 4) 00:14:14.725 5.440 - 5.469: 98.9769% ( 8) 00:14:14.725 5.469 - 5.498: 99.0145% ( 4) 00:14:14.725 5.498 - 5.527: 99.0426% ( 3) 00:14:14.725 5.527 - 5.556: 99.0614% ( 2) 00:14:14.725 5.556 - 5.585: 99.0895% ( 3) 00:14:14.725 5.585 - 5.615: 99.1083% ( 2) 00:14:14.725 5.615 - 5.644: 99.1271% ( 2) 00:14:14.725 5.644 - 5.673: 99.1365% ( 1) 00:14:14.725 5.673 - 5.702: 99.1552% ( 2) 00:14:14.725 5.702 - 5.731: 99.1740% ( 2) 00:14:14.725 5.731 - 5.760: 99.1834% ( 1) 00:14:14.725 5.760 - 5.789: 99.2022% ( 2) 00:14:14.725 5.789 - 5.818: 99.2209% ( 2) 00:14:14.725 6.022 - 6.051: 99.2303% ( 1) 00:14:14.725 6.051 - 6.080: 99.2397% ( 1) 00:14:14.725 9.367 - 9.425: 99.2491% ( 1) 00:14:14.725 9.658 - 9.716: 99.2585% ( 1) 00:14:14.725 9.833 - 9.891: 99.2679% ( 1) 00:14:14.725 9.949 - 10.007: 99.2867% ( 2) 00:14:14.725 10.007 - 10.065: 99.2960% ( 1) 00:14:14.725 10.065 - 10.124: 99.3148% ( 2) 00:14:14.725 10.124 - 10.182: 99.3242% ( 1) 00:14:14.725 10.182 - 10.240: 99.3430% ( 2) 00:14:14.725 10.240 - 10.298: 99.3524% ( 1) 00:14:14.725 10.298 - 10.356: 99.3617% ( 1) 00:14:14.725 10.473 - 10.531: 99.3899% ( 3) 00:14:14.725 10.531 - 10.589: 99.3993% ( 1) 00:14:14.725 10.589 - 10.647: 99.4087% ( 1) 00:14:14.725 10.705 - 10.764: 99.4181% ( 
1) 00:14:14.725 10.764 - 10.822: 99.4368% ( 2) 00:14:14.725 10.822 - 10.880: 99.4556% ( 2) 00:14:14.725 10.938 - 10.996: 99.4650% ( 1) 00:14:14.725 11.113 - 11.171: 99.4744% ( 1) 00:14:14.725 11.171 - 11.229: 99.4931% ( 2) 00:14:14.725 11.229 - 11.287: 99.5025% ( 1) 00:14:14.725 11.345 - 11.404: 99.5119% ( 1) 00:14:14.725 11.404 - 11.462: 99.5213% ( 1) 00:14:14.725 11.520 - 11.578: 99.5307% ( 1) 00:14:14.725 11.636 - 11.695: 99.5495% ( 2) 00:14:14.725 11.695 - 11.753: 99.5589% ( 1) 00:14:14.725 11.753 - 11.811: 99.5682% ( 1) 00:14:14.725 11.927 - 11.985: 99.5776% ( 1) 00:14:14.725 11.985 - 12.044: 99.5870% ( 1) 00:14:14.726 12.044 - 12.102: 99.5964% ( 1) 00:14:14.726 12.160 - 12.218: 99.6058% ( 1) 00:14:14.726 12.335 - 12.393: 99.6152% ( 1) 00:14:14.726 12.393 - 12.451: 99.6246% ( 1) 00:14:14.726 12.451 - 12.509: 99.6339% ( 1) 00:14:14.726 12.742 - 12.800: 99.6433% ( 1) 00:14:14.726 12.858 - 12.916: 99.6527% ( 1) 00:14:14.726 13.091 - 13.149: 99.6621% ( 1) 00:14:14.726 13.207 - 13.265: 99.6715% ( 1) 00:14:14.726 13.440 - 13.498: 99.6809% ( 1) 00:14:14.726 13.498 - 13.556: 99.6903% ( 1) 00:14:14.726 13.847 - 13.905: 99.6996% ( 1) 00:14:14.726 14.022 - 14.080: 99.7184% ( 2) 00:14:14.726 14.778 - 14.836: 99.7278% ( 1) 00:14:14.726 15.011 - 15.127: 99.7372% ( 1) 00:14:14.726 15.127 - 15.244: 99.7466% ( 1) 00:14:14.726 15.360 - 15.476: 99.7560% ( 1) 00:14:14.726 16.175 - 16.291: 99.7653% ( 1) 00:14:14.726 18.385 - 18.502: 99.7747% ( 1) 00:14:14.726 19.084 - 19.200: 99.7841% ( 1) 00:14:14.726 19.316 - 19.433: 99.7935% ( 1) 00:14:14.726 19.549 - 19.665: 99.8029% ( 1) 00:14:14.726 20.131 - 20.247: 99.8123% ( 1) 00:14:14.726 20.480 - 20.596: 99.8217% ( 1) 00:14:14.726 20.596 - 20.713: 99.8310% ( 1) 00:14:14.726 20.713 - 20.829: 99.8404% ( 1) 00:14:14.726 20.945 - 21.062: 99.8592% ( 2) 00:14:14.726 22.807 - 22.924: 99.8686% ( 1) 00:14:14.726 23.389 - 23.505: 99.8780% ( 1) 00:14:14.726 25.600 - 25.716: 99.8874% ( 1) 00:14:14.726 31.651 - 31.884: 99.8968% ( 1) 00:14:14.726 32.582 - 32.815: 99.9061% ( 1) 00:14:14.726 32.815 - 33.047: 99.9155% ( 1) 00:14:14.726 3991.738 - 4021.527: 99.9625% ( 5) 00:14:14.726 4021.527 - 4051.316: 99.9906% ( 3) 00:14:14.726 6017.396 - 6047.185: 100.0000% ( 1) 00:14:14.726 00:14:14.726 Complete histogram 00:14:14.726 ================== 00:14:14.726 Range in us Cumulative Count 00:14:14.726 2.036 - 2.051: 0.0375% ( 4) 00:14:14.726 2.051 - 2.065: 17.4770% ( 1858) 00:14:14.726 2.065 - 2.080: 75.6523% ( 6198) 00:14:14.726 2.080 - 2.095: 87.3756% ( 1249) 00:14:14.726 2.095 - 2.109: 88.6897% ( 140) 00:14:14.726 2.109 - 2.124: 89.1308% ( 47) 00:14:14.726 2.124 - 2.138: 90.8485% ( 183) 00:14:14.726 2.138 - 2.153: 94.4903% ( 388) 00:14:14.726 2.153 - 2.167: 95.8701% ( 147) 00:14:14.726 2.167 - 2.182: 96.6116% ( 79) 00:14:14.726 2.182 - 2.196: 97.0152% ( 43) 00:14:14.726 2.196 - 2.211: 97.4282% ( 44) 00:14:14.726 2.211 - 2.225: 97.8506% ( 45) 00:14:14.726 2.225 - 2.240: 98.1228% ( 29) 00:14:14.726 2.240 - 2.255: 98.2072% ( 9) 00:14:14.726 2.255 - 2.269: 98.3293% ( 13) 00:14:14.726 2.269 - 2.284: 98.3762% ( 5) 00:14:14.726 2.284 - 2.298: 98.4044% ( 3) 00:14:14.726 2.298 - 2.313: 98.4325% ( 3) 00:14:14.726 2.313 - 2.327: 98.4513% ( 2) 00:14:14.726 2.327 - 2.342: 98.4701% ( 2) 00:14:14.726 2.371 - 2.385: 98.4794% ( 1) 00:14:14.726 2.400 - 2.415: 98.4888% ( 1) 00:14:14.726 2.415 - 2.429: 98.4982% ( 1) 00:14:14.726 2.429 - 2.444: 98.5076% ( 1) 00:14:14.726 2.444 - 2.458: 98.5264% ( 2) 00:14:14.726 2.473 - 2.487: 98.5639% ( 4) 00:14:14.726 2.502 - 2.516: 98.5733% ( 1) 00:14:14.726 2.531 - 
2.545: 98.5827% ( 1) 00:14:14.726 2.604 - 2.618: 98.5921% ( 1) 00:14:14.726 2.618 - 2.633: 98.6015% ( 1) 00:14:14.726 3.724 - 3.753: 98.6109% ( 1) 00:14:14.726 3.898 - 3.927: 98.6202% ( 1) 00:14:14.726 3.956 - 3.985: 98.6578% ( 4) 00:14:14.726 4.015 - 4.044: 98.6859% ( 3) 00:14:14.726 4.044 - 4.073: 98.6953% ( 1) 00:14:14.726 4.131 - 4.160: 98.7516% ( 6) 00:14:14.726 4.160 - 4.189: 98.7704% ( 2) 00:14:14.726 4.218 - 4.247: 98.7798% ( 1) 00:14:14.726 4.247 - 4.276: 98.7892% ( 1) 00:14:14.726 4.276 - 4.305: 98.7986% ( 1) 00:14:14.726 4.305 - 4.335: 98.8267% ( 3) 00:14:14.726 4.422 - 4.451: 98.8361% ( 1) 00:14:14.726 4.480 - 4.509: 98.8455% ( 1) 00:14:14.726 4.567 - 4.596: 98.8549% ( 1) 00:14:14.726 4.596 - 4.625: 98.8830% ( 3) 00:14:14.726 4.655 - 4.684: 98.8924% ( 1) 00:14:14.726 4.684 - 4.713: 98.9018% ( 1) 00:14:14.726 4.800 - 4.829: 98.9112% ( 1) 00:14:14.726 4.975 - 5.004: 98.9206% ( 1) 00:14:14.726 5.004 - 5.033: 98.9300% ( 1) 00:14:14.726 5.120 - 5.149: 98.9394% ( 1) 00:14:14.726 5.178 - 5.207: 98.9488% ( 1) 00:14:14.726 5.295 - 5.324: 98.9581% ( 1) 00:14:14.726 5.527 - 5.556: 98.9675% ( 1) 00:14:14.726 7.738 - 7.796: 98.9769% ( 1) 00:14:14.726 8.029 - 8.087: 98.9863% ( 1) 00:14:14.726 8.204 - 8.262: 98.9957% ( 1) 00:14:14.726 8.262 - 8.320: 99.0051% ( 1) 00:14:14.726 8.378 - 8.436: 99.0145% ( 1) 00:14:14.726 8.785 - 8.844: 99.0238% ( 1) 00:14:14.726 9.018 - 9.076: 99.0332% ( 1) 00:14:14.726 9.076 - 9.135: 99.0426% ( 1) 00:14:14.726 9.484 - 9.542: 99.0520% ( 1) 00:14:14.726 9.775 - 9.833: 99.0614% ( 1) 00:14:14.726 10.065 - 10.124: 99.0802% ( 2) 00:14:14.726 10.124 - 10.182: 99.0895% ( 1) 00:14:14.726 10.356 - 10.415: 99.1083% ( 2) 00:14:14.726 10.415 - 10.473: 99.1177% ( 1) 00:14:14.726 10.589 - 10.647: 99.1271% ( 1) 00:14:14.726 10.764 - 10.822: 99.1365% ( 1) 00:14:14.726 10.822 - 10.880: 99.1459% ( 1) 00:14:14.726 10.938 - 10.996: 99.1552% ( 1) 00:14:14.726 11.287 - 11.345: 99.1646% ( 1) 00:14:14.726 11.404 - 11.462: 99.1740% ( 1) 00:14:14.726 11.578 - 11.636: 99.1834% ( 1) 00:14:14.726 12.684 - 12.742: 99.1928% ( 1) 00:14:14.726 15.244 - 15.360: 99.2022% ( 1) 00:14:14.726 15.476 - 15.593: 99.2116% ( 1) 00:14:14.726 16.989 - 17.105: 99.2209% ( 1) 00:14:14.726 17.105 - 17.222: 99.2303% ( 1) 00:14:14.726 17.222 - 17.338: 99.2397% ( 1) 00:14:14.726 17.571 - 17.687: 99.2491% ( 1) 00:14:14.726 18.502 - 18.618: 99.2585% ( 1) 00:14:14.726 19.084 - 19.200: 99.2679% ( 1) 00:14:14.726 22.924 - 23.040: 99.2773% ( 1) 00:14:14.726 3038.487 - 3053.382: 99.2960% ( 2) 00:14:14.726 3053.382 - 3068.276: 99.3054% ( 1) 00:14:14.726 3961.949 - 3991.738: 99.3430% ( 4) 00:14:14.726 3991.738 - 4021.527: 99.8498% ( 54) 00:14:14.726 4021.527 - 4051.316: 99.9625% ( 12) 00:14:14.726 4051.316 - 4081.105: 99.9718% ( 1) 00:14:14.726 5004.567 - 5034.356: 99.9812% ( 1) 00:14:14.726 5034.356 - 5064.145: 99.9906% ( 1) 00:14:14.726 7000.436 - 7030.225: 100.0000% ( 1) 00:14:14.726 00:14:14.726 02:30:55 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:14.726 02:30:55 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:14.726 02:30:55 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:14.726 02:30:55 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:14.726 02:30:55 -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:14.986 [ 00:14:14.986 { 00:14:14.986 "allow_any_host": true, 00:14:14.986 "hosts": [], 
00:14:14.986 "listen_addresses": [], 00:14:14.986 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:14.986 "subtype": "Discovery" 00:14:14.986 }, 00:14:14.986 { 00:14:14.986 "allow_any_host": true, 00:14:14.986 "hosts": [], 00:14:14.986 "listen_addresses": [ 00:14:14.986 { 00:14:14.986 "adrfam": "IPv4", 00:14:14.986 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:14.986 "transport": "VFIOUSER", 00:14:14.986 "trsvcid": "0", 00:14:14.986 "trtype": "VFIOUSER" 00:14:14.986 } 00:14:14.986 ], 00:14:14.986 "max_cntlid": 65519, 00:14:14.986 "max_namespaces": 32, 00:14:14.986 "min_cntlid": 1, 00:14:14.986 "model_number": "SPDK bdev Controller", 00:14:14.986 "namespaces": [ 00:14:14.986 { 00:14:14.986 "bdev_name": "Malloc1", 00:14:14.986 "name": "Malloc1", 00:14:14.986 "nguid": "BC2CD621543C45DF9F8D3A72F4694E05", 00:14:14.986 "nsid": 1, 00:14:14.986 "uuid": "bc2cd621-543c-45df-9f8d-3a72f4694e05" 00:14:14.986 }, 00:14:14.986 { 00:14:14.986 "bdev_name": "Malloc3", 00:14:14.986 "name": "Malloc3", 00:14:14.986 "nguid": "B34904128C084844BD423429FD9E6715", 00:14:14.986 "nsid": 2, 00:14:14.986 "uuid": "b3490412-8c08-4844-bd42-3429fd9e6715" 00:14:14.986 } 00:14:14.986 ], 00:14:14.986 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:14.986 "serial_number": "SPDK1", 00:14:14.986 "subtype": "NVMe" 00:14:14.986 }, 00:14:14.986 { 00:14:14.986 "allow_any_host": true, 00:14:14.986 "hosts": [], 00:14:14.986 "listen_addresses": [ 00:14:14.986 { 00:14:14.986 "adrfam": "IPv4", 00:14:14.986 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:14.986 "transport": "VFIOUSER", 00:14:14.986 "trsvcid": "0", 00:14:14.986 "trtype": "VFIOUSER" 00:14:14.986 } 00:14:14.986 ], 00:14:14.986 "max_cntlid": 65519, 00:14:14.986 "max_namespaces": 32, 00:14:14.986 "min_cntlid": 1, 00:14:14.986 "model_number": "SPDK bdev Controller", 00:14:14.986 "namespaces": [ 00:14:14.986 { 00:14:14.986 "bdev_name": "Malloc2", 00:14:14.986 "name": "Malloc2", 00:14:14.986 "nguid": "F9EC39E8D8524600995AD69AA6A74E4B", 00:14:14.986 "nsid": 1, 00:14:14.986 "uuid": "f9ec39e8-d852-4600-995a-d69aa6a74e4b" 00:14:14.986 } 00:14:14.986 ], 00:14:14.986 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:14.986 "serial_number": "SPDK2", 00:14:14.986 "subtype": "NVMe" 00:14:14.986 } 00:14:14.986 ] 00:14:14.986 02:30:55 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:14.986 02:30:55 -- target/nvmf_vfio_user.sh@34 -- # aerpid=71491 00:14:14.986 02:30:55 -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:14.986 02:30:55 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:14.986 02:30:55 -- common/autotest_common.sh@1254 -- # local i=0 00:14:14.986 02:30:55 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:14.986 02:30:55 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:14:14.986 02:30:55 -- common/autotest_common.sh@1257 -- # i=1 00:14:14.986 02:30:55 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:15.245 02:30:55 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:15.245 02:30:55 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:14:15.245 02:30:55 -- common/autotest_common.sh@1257 -- # i=2 00:14:15.245 02:30:55 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:14:15.245 02:30:55 -- common/autotest_common.sh@1255 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:15.245 02:30:55 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:15.245 02:30:55 -- common/autotest_common.sh@1265 -- # return 0 00:14:15.245 02:30:55 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:15.245 02:30:55 -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:15.503 Malloc4 00:14:15.503 02:30:56 -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:16.070 02:30:56 -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:16.070 Asynchronous Event Request test 00:14:16.070 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:16.070 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:16.070 Registering asynchronous event callbacks... 00:14:16.070 Starting namespace attribute notice tests for all controllers... 00:14:16.070 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:16.070 aer_cb - Changed Namespace 00:14:16.070 Cleaning up... 00:14:16.070 [ 00:14:16.070 { 00:14:16.070 "allow_any_host": true, 00:14:16.070 "hosts": [], 00:14:16.070 "listen_addresses": [], 00:14:16.070 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:16.070 "subtype": "Discovery" 00:14:16.070 }, 00:14:16.070 { 00:14:16.070 "allow_any_host": true, 00:14:16.070 "hosts": [], 00:14:16.070 "listen_addresses": [ 00:14:16.070 { 00:14:16.070 "adrfam": "IPv4", 00:14:16.070 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:16.070 "transport": "VFIOUSER", 00:14:16.070 "trsvcid": "0", 00:14:16.070 "trtype": "VFIOUSER" 00:14:16.070 } 00:14:16.070 ], 00:14:16.070 "max_cntlid": 65519, 00:14:16.070 "max_namespaces": 32, 00:14:16.070 "min_cntlid": 1, 00:14:16.070 "model_number": "SPDK bdev Controller", 00:14:16.070 "namespaces": [ 00:14:16.070 { 00:14:16.070 "bdev_name": "Malloc1", 00:14:16.070 "name": "Malloc1", 00:14:16.070 "nguid": "BC2CD621543C45DF9F8D3A72F4694E05", 00:14:16.070 "nsid": 1, 00:14:16.070 "uuid": "bc2cd621-543c-45df-9f8d-3a72f4694e05" 00:14:16.070 }, 00:14:16.070 { 00:14:16.070 "bdev_name": "Malloc3", 00:14:16.070 "name": "Malloc3", 00:14:16.070 "nguid": "B34904128C084844BD423429FD9E6715", 00:14:16.070 "nsid": 2, 00:14:16.070 "uuid": "b3490412-8c08-4844-bd42-3429fd9e6715" 00:14:16.070 } 00:14:16.071 ], 00:14:16.071 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:16.071 "serial_number": "SPDK1", 00:14:16.071 "subtype": "NVMe" 00:14:16.071 }, 00:14:16.071 { 00:14:16.071 "allow_any_host": true, 00:14:16.071 "hosts": [], 00:14:16.071 "listen_addresses": [ 00:14:16.071 { 00:14:16.071 "adrfam": "IPv4", 00:14:16.071 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:16.071 "transport": "VFIOUSER", 00:14:16.071 "trsvcid": "0", 00:14:16.071 "trtype": "VFIOUSER" 00:14:16.071 } 00:14:16.071 ], 00:14:16.071 "max_cntlid": 65519, 00:14:16.071 "max_namespaces": 32, 00:14:16.071 "min_cntlid": 1, 00:14:16.071 "model_number": "SPDK bdev Controller", 00:14:16.071 "namespaces": [ 00:14:16.071 { 00:14:16.071 "bdev_name": "Malloc2", 00:14:16.071 "name": "Malloc2", 00:14:16.071 "nguid": "F9EC39E8D8524600995AD69AA6A74E4B", 00:14:16.071 "nsid": 1, 00:14:16.071 "uuid": "f9ec39e8-d852-4600-995a-d69aa6a74e4b" 00:14:16.071 }, 00:14:16.071 { 00:14:16.071 "bdev_name": "Malloc4", 00:14:16.071 "name": "Malloc4", 00:14:16.071 "nguid": 
"B0655C967A2E45FD8E65734F4C7E4747", 00:14:16.071 "nsid": 2, 00:14:16.071 "uuid": "b0655c96-7a2e-45fd-8e65-734f4c7e4747" 00:14:16.071 } 00:14:16.071 ], 00:14:16.071 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:16.071 "serial_number": "SPDK2", 00:14:16.071 "subtype": "NVMe" 00:14:16.071 } 00:14:16.071 ] 00:14:16.330 02:30:56 -- target/nvmf_vfio_user.sh@44 -- # wait 71491 00:14:16.330 02:30:56 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:16.330 02:30:56 -- target/nvmf_vfio_user.sh@95 -- # killprocess 70809 00:14:16.330 02:30:56 -- common/autotest_common.sh@936 -- # '[' -z 70809 ']' 00:14:16.330 02:30:56 -- common/autotest_common.sh@940 -- # kill -0 70809 00:14:16.330 02:30:56 -- common/autotest_common.sh@941 -- # uname 00:14:16.330 02:30:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:16.330 02:30:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70809 00:14:16.330 killing process with pid 70809 00:14:16.330 02:30:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:16.330 02:30:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:16.330 02:30:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70809' 00:14:16.330 02:30:56 -- common/autotest_common.sh@955 -- # kill 70809 00:14:16.330 [2024-11-21 02:30:56.755226] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:16.330 02:30:56 -- common/autotest_common.sh@960 -- # wait 70809 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:16.898 Process pid: 71544 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=71544 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 71544' 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:16.898 02:30:57 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 71544 00:14:16.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.898 02:30:57 -- common/autotest_common.sh@829 -- # '[' -z 71544 ']' 00:14:16.898 02:30:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.898 02:30:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.898 02:30:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.898 02:30:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.898 02:30:57 -- common/autotest_common.sh@10 -- # set +x 00:14:16.898 [2024-11-21 02:30:57.295938] thread.c:2929:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:16.898 [2024-11-21 02:30:57.297365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:16.898 [2024-11-21 02:30:57.297435] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.898 [2024-11-21 02:30:57.431393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.157 [2024-11-21 02:30:57.544422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:17.157 [2024-11-21 02:30:57.544582] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.157 [2024-11-21 02:30:57.544596] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.157 [2024-11-21 02:30:57.544606] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.157 [2024-11-21 02:30:57.544773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.157 [2024-11-21 02:30:57.545044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.157 [2024-11-21 02:30:57.545639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.157 [2024-11-21 02:30:57.545654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.157 [2024-11-21 02:30:57.635854] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:14:17.157 [2024-11-21 02:30:57.645047] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:14:17.157 [2024-11-21 02:30:57.645266] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:14:17.157 [2024-11-21 02:30:57.645884] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:17.157 [2024-11-21 02:30:57.646016] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
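Condensed, the per-controller setup that the trace below steps through amounts to the following RPC sequence (a sketch for controller 1, assembled from the calls visible in the trace; the log repeats the same steps with Malloc2/SPDK2 under vfio-user2):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER -M -I      # transport args passed by the interrupt-mode variant of the test
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB backing bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
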
00:14:18.093 02:30:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.093 02:30:58 -- common/autotest_common.sh@862 -- # return 0 00:14:18.093 02:30:58 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:19.028 02:30:59 -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:19.028 02:30:59 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:19.028 02:30:59 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:19.028 02:30:59 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:19.028 02:30:59 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:19.286 02:30:59 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:19.545 Malloc1 00:14:19.545 02:30:59 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:19.804 02:31:00 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:20.063 02:31:00 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:20.322 02:31:00 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:20.322 02:31:00 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:20.322 02:31:00 -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:20.580 Malloc2 00:14:20.580 02:31:01 -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:20.838 02:31:01 -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:21.097 02:31:01 -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:21.356 02:31:01 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:21.356 02:31:01 -- target/nvmf_vfio_user.sh@95 -- # killprocess 71544 00:14:21.356 02:31:01 -- common/autotest_common.sh@936 -- # '[' -z 71544 ']' 00:14:21.356 02:31:01 -- common/autotest_common.sh@940 -- # kill -0 71544 00:14:21.356 02:31:01 -- common/autotest_common.sh@941 -- # uname 00:14:21.356 02:31:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:21.356 02:31:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71544 00:14:21.356 killing process with pid 71544 00:14:21.356 02:31:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:21.356 02:31:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:21.356 02:31:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71544' 00:14:21.356 02:31:01 -- common/autotest_common.sh@955 -- # kill 71544 00:14:21.356 02:31:01 -- common/autotest_common.sh@960 -- # wait 71544 00:14:21.614 02:31:02 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:21.615 02:31:02 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:21.615 ************************************ 00:14:21.615 END TEST nvmf_vfio_user 00:14:21.615 
************************************ 00:14:21.615 00:14:21.615 real 0m56.072s 00:14:21.615 user 3m40.749s 00:14:21.615 sys 0m3.823s 00:14:21.615 02:31:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:21.615 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:14:21.615 02:31:02 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:21.615 02:31:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:21.615 02:31:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.615 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:14:21.615 ************************************ 00:14:21.615 START TEST nvmf_vfio_user_nvme_compliance 00:14:21.615 ************************************ 00:14:21.615 02:31:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:21.874 * Looking for test storage... 00:14:21.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:14:21.874 02:31:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:21.874 02:31:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:21.874 02:31:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:21.874 02:31:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:21.874 02:31:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:21.874 02:31:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:21.874 02:31:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:21.874 02:31:02 -- scripts/common.sh@335 -- # IFS=.-: 00:14:21.874 02:31:02 -- scripts/common.sh@335 -- # read -ra ver1 00:14:21.874 02:31:02 -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.874 02:31:02 -- scripts/common.sh@336 -- # read -ra ver2 00:14:21.874 02:31:02 -- scripts/common.sh@337 -- # local 'op=<' 00:14:21.874 02:31:02 -- scripts/common.sh@339 -- # ver1_l=2 00:14:21.874 02:31:02 -- scripts/common.sh@340 -- # ver2_l=1 00:14:21.874 02:31:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:21.874 02:31:02 -- scripts/common.sh@343 -- # case "$op" in 00:14:21.874 02:31:02 -- scripts/common.sh@344 -- # : 1 00:14:21.874 02:31:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:21.874 02:31:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.874 02:31:02 -- scripts/common.sh@364 -- # decimal 1 00:14:21.874 02:31:02 -- scripts/common.sh@352 -- # local d=1 00:14:21.874 02:31:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.874 02:31:02 -- scripts/common.sh@354 -- # echo 1 00:14:21.874 02:31:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:21.874 02:31:02 -- scripts/common.sh@365 -- # decimal 2 00:14:21.874 02:31:02 -- scripts/common.sh@352 -- # local d=2 00:14:21.874 02:31:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.874 02:31:02 -- scripts/common.sh@354 -- # echo 2 00:14:21.874 02:31:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:21.874 02:31:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:21.874 02:31:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:21.874 02:31:02 -- scripts/common.sh@367 -- # return 0 00:14:21.875 02:31:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.875 02:31:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:21.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.875 --rc genhtml_branch_coverage=1 00:14:21.875 --rc genhtml_function_coverage=1 00:14:21.875 --rc genhtml_legend=1 00:14:21.875 --rc geninfo_all_blocks=1 00:14:21.875 --rc geninfo_unexecuted_blocks=1 00:14:21.875 00:14:21.875 ' 00:14:21.875 02:31:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:21.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.875 --rc genhtml_branch_coverage=1 00:14:21.875 --rc genhtml_function_coverage=1 00:14:21.875 --rc genhtml_legend=1 00:14:21.875 --rc geninfo_all_blocks=1 00:14:21.875 --rc geninfo_unexecuted_blocks=1 00:14:21.875 00:14:21.875 ' 00:14:21.875 02:31:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:21.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.875 --rc genhtml_branch_coverage=1 00:14:21.875 --rc genhtml_function_coverage=1 00:14:21.875 --rc genhtml_legend=1 00:14:21.875 --rc geninfo_all_blocks=1 00:14:21.875 --rc geninfo_unexecuted_blocks=1 00:14:21.875 00:14:21.875 ' 00:14:21.875 02:31:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:21.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.875 --rc genhtml_branch_coverage=1 00:14:21.875 --rc genhtml_function_coverage=1 00:14:21.875 --rc genhtml_legend=1 00:14:21.875 --rc geninfo_all_blocks=1 00:14:21.875 --rc geninfo_unexecuted_blocks=1 00:14:21.875 00:14:21.875 ' 00:14:21.875 02:31:02 -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.875 02:31:02 -- nvmf/common.sh@7 -- # uname -s 00:14:21.875 02:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.875 02:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.875 02:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.875 02:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.875 02:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.875 02:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.875 02:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.875 02:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.875 02:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.875 02:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.875 02:31:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
00:14:21.875 02:31:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:14:21.875 02:31:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.875 02:31:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.875 02:31:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.875 02:31:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.875 02:31:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.875 02:31:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.875 02:31:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.875 02:31:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.875 02:31:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.875 02:31:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.875 02:31:02 -- paths/export.sh@5 -- # export PATH 00:14:21.875 02:31:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.875 02:31:02 -- nvmf/common.sh@46 -- # : 0 00:14:21.875 02:31:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:21.875 02:31:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:21.875 02:31:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:21.875 02:31:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.875 02:31:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.875 02:31:02 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:21.875 02:31:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:21.875 02:31:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:21.875 02:31:02 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:21.875 02:31:02 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:21.875 02:31:02 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:21.875 02:31:02 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:21.875 02:31:02 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:21.875 02:31:02 -- compliance/compliance.sh@20 -- # nvmfpid=71742 00:14:21.875 Process pid: 71742 00:14:21.875 02:31:02 -- compliance/compliance.sh@21 -- # echo 'Process pid: 71742' 00:14:21.875 02:31:02 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:21.875 02:31:02 -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:21.875 02:31:02 -- compliance/compliance.sh@24 -- # waitforlisten 71742 00:14:21.875 02:31:02 -- common/autotest_common.sh@829 -- # '[' -z 71742 ']' 00:14:21.875 02:31:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.875 02:31:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.875 02:31:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.875 02:31:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.875 02:31:02 -- common/autotest_common.sh@10 -- # set +x 00:14:22.134 [2024-11-21 02:31:02.524280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:22.134 [2024-11-21 02:31:02.524403] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.134 [2024-11-21 02:31:02.663843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:22.134 [2024-11-21 02:31:02.777776] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:22.134 [2024-11-21 02:31:02.777998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.134 [2024-11-21 02:31:02.778011] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.134 [2024-11-21 02:31:02.778020] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:22.134 [2024-11-21 02:31:02.778395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.134 [2024-11-21 02:31:02.778233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.134 [2024-11-21 02:31:02.778385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.068 02:31:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.068 02:31:03 -- common/autotest_common.sh@862 -- # return 0 00:14:23.068 02:31:03 -- compliance/compliance.sh@26 -- # sleep 1 00:14:24.002 02:31:04 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:24.002 02:31:04 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:24.002 02:31:04 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:24.002 02:31:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.002 02:31:04 -- common/autotest_common.sh@10 -- # set +x 00:14:24.002 02:31:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.002 02:31:04 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:24.002 02:31:04 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:24.002 02:31:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.002 02:31:04 -- common/autotest_common.sh@10 -- # set +x 00:14:24.002 malloc0 00:14:24.002 02:31:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.002 02:31:04 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:24.002 02:31:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.002 02:31:04 -- common/autotest_common.sh@10 -- # set +x 00:14:24.002 02:31:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.002 02:31:04 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:24.002 02:31:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.002 02:31:04 -- common/autotest_common.sh@10 -- # set +x 00:14:24.002 02:31:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.002 02:31:04 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:24.002 02:31:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.002 02:31:04 -- common/autotest_common.sh@10 -- # set +x 00:14:24.002 02:31:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.002 02:31:04 -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:24.260 00:14:24.260 00:14:24.260 CUnit - A unit testing framework for C - Version 2.1-3 00:14:24.260 http://cunit.sourceforge.net/ 00:14:24.260 00:14:24.260 00:14:24.260 Suite: nvme_compliance 00:14:24.260 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-21 02:31:04.814039] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:24.260 [2024-11-21 02:31:04.814122] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:24.260 [2024-11-21 02:31:04.814133] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:24.260 passed 00:14:24.517 Test: admin_identify_ctrlr_verify_fused ...passed 00:14:24.517 Test: admin_identify_ns ...[2024-11-21 02:31:05.056825] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for 
invalid NSID 0 00:14:24.517 [2024-11-21 02:31:05.063829] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:24.517 passed 00:14:24.775 Test: admin_get_features_mandatory_features ...passed 00:14:24.775 Test: admin_get_features_optional_features ...passed 00:14:25.032 Test: admin_set_features_number_of_queues ...passed 00:14:25.032 Test: admin_get_log_page_mandatory_logs ...passed 00:14:25.290 Test: admin_get_log_page_with_lpo ...[2024-11-21 02:31:05.700766] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:25.290 passed 00:14:25.290 Test: fabric_property_get ...passed 00:14:25.290 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-21 02:31:05.890976] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:25.290 passed 00:14:25.549 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-21 02:31:06.062800] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:25.549 [2024-11-21 02:31:06.078825] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:25.549 passed 00:14:25.549 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-21 02:31:06.174767] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:25.808 passed 00:14:25.808 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-21 02:31:06.334814] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:25.808 [2024-11-21 02:31:06.358818] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:25.808 passed 00:14:25.808 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-21 02:31:06.450965] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:25.808 [2024-11-21 02:31:06.451064] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:26.066 passed 00:14:26.066 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-21 02:31:06.633780] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:26.066 [2024-11-21 02:31:06.641779] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:26.066 [2024-11-21 02:31:06.649777] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:26.066 [2024-11-21 02:31:06.657768] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:26.325 passed 00:14:26.325 Test: admin_create_io_sq_verify_pc ...[2024-11-21 02:31:06.789792] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:26.325 passed 00:14:27.701 Test: admin_create_io_qp_max_qps ...[2024-11-21 02:31:07.966786] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:27.960 passed 00:14:27.960 Test: admin_create_io_sq_shared_cq ...[2024-11-21 02:31:08.562785] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:28.219 passed 00:14:28.219 00:14:28.219 Run Summary: Type Total Ran Passed Failed Inactive 00:14:28.219 suites 1 1 n/a 0 0 00:14:28.219 tests 18 18 18 0 0 00:14:28.219 asserts 360 360 360 0 n/a 00:14:28.219 00:14:28.219 Elapsed time = 1.567 seconds 00:14:28.219 02:31:08 -- compliance/compliance.sh@42 -- # killprocess 71742 00:14:28.219 02:31:08 -- 
common/autotest_common.sh@936 -- # '[' -z 71742 ']' 00:14:28.219 02:31:08 -- common/autotest_common.sh@940 -- # kill -0 71742 00:14:28.219 02:31:08 -- common/autotest_common.sh@941 -- # uname 00:14:28.219 02:31:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.219 02:31:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71742 00:14:28.219 02:31:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:28.219 02:31:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:28.219 killing process with pid 71742 00:14:28.219 02:31:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71742' 00:14:28.219 02:31:08 -- common/autotest_common.sh@955 -- # kill 71742 00:14:28.219 02:31:08 -- common/autotest_common.sh@960 -- # wait 71742 00:14:28.478 02:31:08 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:28.478 02:31:08 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:28.478 00:14:28.478 real 0m6.725s 00:14:28.478 user 0m18.649s 00:14:28.478 sys 0m0.553s 00:14:28.479 02:31:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:28.479 02:31:08 -- common/autotest_common.sh@10 -- # set +x 00:14:28.479 ************************************ 00:14:28.479 END TEST nvmf_vfio_user_nvme_compliance 00:14:28.479 ************************************ 00:14:28.479 02:31:09 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:28.479 02:31:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.479 02:31:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.479 02:31:09 -- common/autotest_common.sh@10 -- # set +x 00:14:28.479 ************************************ 00:14:28.479 START TEST nvmf_vfio_user_fuzz 00:14:28.479 ************************************ 00:14:28.479 02:31:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:28.479 * Looking for test storage... 00:14:28.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:28.479 02:31:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:28.479 02:31:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:28.479 02:31:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:28.740 02:31:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:28.740 02:31:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:28.740 02:31:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:28.741 02:31:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:28.741 02:31:09 -- scripts/common.sh@335 -- # IFS=.-: 00:14:28.741 02:31:09 -- scripts/common.sh@335 -- # read -ra ver1 00:14:28.741 02:31:09 -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.741 02:31:09 -- scripts/common.sh@336 -- # read -ra ver2 00:14:28.741 02:31:09 -- scripts/common.sh@337 -- # local 'op=<' 00:14:28.741 02:31:09 -- scripts/common.sh@339 -- # ver1_l=2 00:14:28.741 02:31:09 -- scripts/common.sh@340 -- # ver2_l=1 00:14:28.741 02:31:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:28.741 02:31:09 -- scripts/common.sh@343 -- # case "$op" in 00:14:28.741 02:31:09 -- scripts/common.sh@344 -- # : 1 00:14:28.741 02:31:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:28.741 02:31:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.741 02:31:09 -- scripts/common.sh@364 -- # decimal 1 00:14:28.741 02:31:09 -- scripts/common.sh@352 -- # local d=1 00:14:28.741 02:31:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.741 02:31:09 -- scripts/common.sh@354 -- # echo 1 00:14:28.741 02:31:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:28.741 02:31:09 -- scripts/common.sh@365 -- # decimal 2 00:14:28.741 02:31:09 -- scripts/common.sh@352 -- # local d=2 00:14:28.741 02:31:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.741 02:31:09 -- scripts/common.sh@354 -- # echo 2 00:14:28.741 02:31:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:28.741 02:31:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:28.741 02:31:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:28.741 02:31:09 -- scripts/common.sh@367 -- # return 0 00:14:28.741 02:31:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.741 02:31:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:28.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.741 --rc genhtml_branch_coverage=1 00:14:28.741 --rc genhtml_function_coverage=1 00:14:28.741 --rc genhtml_legend=1 00:14:28.741 --rc geninfo_all_blocks=1 00:14:28.741 --rc geninfo_unexecuted_blocks=1 00:14:28.741 00:14:28.741 ' 00:14:28.741 02:31:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:28.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.741 --rc genhtml_branch_coverage=1 00:14:28.741 --rc genhtml_function_coverage=1 00:14:28.741 --rc genhtml_legend=1 00:14:28.741 --rc geninfo_all_blocks=1 00:14:28.741 --rc geninfo_unexecuted_blocks=1 00:14:28.741 00:14:28.741 ' 00:14:28.741 02:31:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:28.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.741 --rc genhtml_branch_coverage=1 00:14:28.741 --rc genhtml_function_coverage=1 00:14:28.741 --rc genhtml_legend=1 00:14:28.741 --rc geninfo_all_blocks=1 00:14:28.741 --rc geninfo_unexecuted_blocks=1 00:14:28.741 00:14:28.741 ' 00:14:28.741 02:31:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:28.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.741 --rc genhtml_branch_coverage=1 00:14:28.741 --rc genhtml_function_coverage=1 00:14:28.741 --rc genhtml_legend=1 00:14:28.741 --rc geninfo_all_blocks=1 00:14:28.741 --rc geninfo_unexecuted_blocks=1 00:14:28.741 00:14:28.741 ' 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:28.741 02:31:09 -- nvmf/common.sh@7 -- # uname -s 00:14:28.741 02:31:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.741 02:31:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.741 02:31:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.741 02:31:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.741 02:31:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.741 02:31:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.741 02:31:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.741 02:31:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.741 02:31:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.741 02:31:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.741 02:31:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
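
A condensed sketch of what the cmp_versions/lt xtrace above is doing: the test header compares the installed lcov version (1.15) against 2 field by field before choosing coverage flags. The helper name ver_lt and this simplified body are not the SPDK code itself (the real script also strips non-numeric suffixes via its decimal helper); this assumes purely numeric dotted fields.

ver_lt() {                       # usage: ver_lt 1.15 2  -> returns 0 if $1 < $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( 10#$x < 10#$y )) && return 0   # first differing field decides
        (( 10#$x > 10#$y )) && return 1
    done
    return 1                              # equal -> not less-than
}
ver_lt 1.15 2 && echo "lcov is older than 2.x: use the --rc lcov_* option spelling"
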
00:14:28.741 02:31:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:14:28.741 02:31:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.741 02:31:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.741 02:31:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:28.741 02:31:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.741 02:31:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.741 02:31:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.741 02:31:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.741 02:31:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.741 02:31:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.741 02:31:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.741 02:31:09 -- paths/export.sh@5 -- # export PATH 00:14:28.741 02:31:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.741 02:31:09 -- nvmf/common.sh@46 -- # : 0 00:14:28.741 02:31:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.741 02:31:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.741 02:31:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.741 02:31:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.741 02:31:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.741 02:31:09 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:28.741 02:31:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.741 02:31:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=71902 00:14:28.741 Process pid: 71902 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 71902' 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 71902 00:14:28.741 02:31:09 -- common/autotest_common.sh@829 -- # '[' -z 71902 ']' 00:14:28.741 02:31:09 -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:28.741 02:31:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.741 02:31:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.741 02:31:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.741 02:31:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.741 02:31:09 -- common/autotest_common.sh@10 -- # set +x 00:14:29.676 02:31:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.676 02:31:10 -- common/autotest_common.sh@862 -- # return 0 00:14:29.676 02:31:10 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:31.051 02:31:11 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:31.051 02:31:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.051 02:31:11 -- common/autotest_common.sh@10 -- # set +x 00:14:31.051 02:31:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.051 02:31:11 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:31.051 02:31:11 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:31.051 02:31:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.051 02:31:11 -- common/autotest_common.sh@10 -- # set +x 00:14:31.051 malloc0 00:14:31.051 02:31:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.051 02:31:11 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:31.051 02:31:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.051 02:31:11 -- common/autotest_common.sh@10 -- # set +x 00:14:31.051 02:31:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.051 02:31:11 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:31.051 02:31:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.051 02:31:11 -- common/autotest_common.sh@10 -- # set +x 00:14:31.051 02:31:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.051 02:31:11 -- 
target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:31.051 02:31:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.051 02:31:11 -- common/autotest_common.sh@10 -- # set +x 00:14:31.051 02:31:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.051 02:31:11 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:31.051 02:31:11 -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:31.310 Shutting down the fuzz application 00:14:31.311 02:31:11 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:31.311 02:31:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.311 02:31:11 -- common/autotest_common.sh@10 -- # set +x 00:14:31.311 02:31:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.311 02:31:11 -- target/vfio_user_fuzz.sh@46 -- # killprocess 71902 00:14:31.311 02:31:11 -- common/autotest_common.sh@936 -- # '[' -z 71902 ']' 00:14:31.311 02:31:11 -- common/autotest_common.sh@940 -- # kill -0 71902 00:14:31.311 02:31:11 -- common/autotest_common.sh@941 -- # uname 00:14:31.311 02:31:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.311 02:31:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71902 00:14:31.311 02:31:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:31.311 02:31:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:31.311 killing process with pid 71902 00:14:31.311 02:31:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71902' 00:14:31.311 02:31:11 -- common/autotest_common.sh@955 -- # kill 71902 00:14:31.311 02:31:11 -- common/autotest_common.sh@960 -- # wait 71902 00:14:31.569 02:31:12 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:31.569 02:31:12 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:31.569 00:14:31.569 real 0m3.161s 00:14:31.569 user 0m3.558s 00:14:31.569 sys 0m0.443s 00:14:31.569 02:31:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:31.569 ************************************ 00:14:31.569 END TEST nvmf_vfio_user_fuzz 00:14:31.569 02:31:12 -- common/autotest_common.sh@10 -- # set +x 00:14:31.569 ************************************ 00:14:31.827 02:31:12 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:31.827 02:31:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:31.827 02:31:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.827 02:31:12 -- common/autotest_common.sh@10 -- # set +x 00:14:31.827 ************************************ 00:14:31.827 START TEST nvmf_host_management 00:14:31.827 ************************************ 00:14:31.827 02:31:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:31.827 * Looking for test storage... 
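
Condensed from the vfio_user_fuzz.sh trace above: the target is started on one core, a vfio-user transport and a malloc-backed subsystem are created over RPC, and nvme_fuzz hammers the admin queue for 30 seconds with a fixed seed. The rpc_cmd wrapper is replaced here with a direct scripts/rpc.py call (an assumption about invoking the same RPCs outside the test framework); the RPC names, NQN, paths and flags are the ones shown in the trace.

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2021-09.io.spdk:cnode0
TRADDR=/var/run/vfio-user

"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &    # target on core 0
nvmfpid=$!
# (the real test waits for /var/tmp/spdk.sock to appear before issuing RPCs)

$RPC nvmf_create_transport -t VFIOUSER                 # vfio-user transport
mkdir -p "$TRADDR"                                      # socket directory
$RPC bdev_malloc_create 64 512 -b malloc0               # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem "$NQN" -a -s spdk            # allow any host
$RPC nvmf_subsystem_add_ns "$NQN" malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$TRADDR" -s 0

# Fuzz the admin queue (-a) for 30 s with a fixed seed, then tear down.
"$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/vfio_user_fuzz \
    -t 30 -S 123456 -N -a \
    -F "trtype:VFIOUSER subnqn:$NQN traddr:$TRADDR"
$RPC nvmf_delete_subsystem "$NQN"
kill "$nvmfpid"; wait "$nvmfpid"
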
00:14:31.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.827 02:31:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:31.827 02:31:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:31.827 02:31:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:31.827 02:31:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:31.827 02:31:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:31.827 02:31:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:31.827 02:31:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:31.827 02:31:12 -- scripts/common.sh@335 -- # IFS=.-: 00:14:31.827 02:31:12 -- scripts/common.sh@335 -- # read -ra ver1 00:14:31.827 02:31:12 -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.827 02:31:12 -- scripts/common.sh@336 -- # read -ra ver2 00:14:31.827 02:31:12 -- scripts/common.sh@337 -- # local 'op=<' 00:14:31.827 02:31:12 -- scripts/common.sh@339 -- # ver1_l=2 00:14:31.827 02:31:12 -- scripts/common.sh@340 -- # ver2_l=1 00:14:31.828 02:31:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:31.828 02:31:12 -- scripts/common.sh@343 -- # case "$op" in 00:14:31.828 02:31:12 -- scripts/common.sh@344 -- # : 1 00:14:31.828 02:31:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:31.828 02:31:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.828 02:31:12 -- scripts/common.sh@364 -- # decimal 1 00:14:31.828 02:31:12 -- scripts/common.sh@352 -- # local d=1 00:14:31.828 02:31:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.828 02:31:12 -- scripts/common.sh@354 -- # echo 1 00:14:31.828 02:31:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:31.828 02:31:12 -- scripts/common.sh@365 -- # decimal 2 00:14:31.828 02:31:12 -- scripts/common.sh@352 -- # local d=2 00:14:31.828 02:31:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.828 02:31:12 -- scripts/common.sh@354 -- # echo 2 00:14:31.828 02:31:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:31.828 02:31:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:31.828 02:31:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:31.828 02:31:12 -- scripts/common.sh@367 -- # return 0 00:14:31.828 02:31:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.828 02:31:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.828 --rc genhtml_branch_coverage=1 00:14:31.828 --rc genhtml_function_coverage=1 00:14:31.828 --rc genhtml_legend=1 00:14:31.828 --rc geninfo_all_blocks=1 00:14:31.828 --rc geninfo_unexecuted_blocks=1 00:14:31.828 00:14:31.828 ' 00:14:31.828 02:31:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.828 --rc genhtml_branch_coverage=1 00:14:31.828 --rc genhtml_function_coverage=1 00:14:31.828 --rc genhtml_legend=1 00:14:31.828 --rc geninfo_all_blocks=1 00:14:31.828 --rc geninfo_unexecuted_blocks=1 00:14:31.828 00:14:31.828 ' 00:14:31.828 02:31:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.828 --rc genhtml_branch_coverage=1 00:14:31.828 --rc genhtml_function_coverage=1 00:14:31.828 --rc genhtml_legend=1 00:14:31.828 --rc geninfo_all_blocks=1 00:14:31.828 --rc geninfo_unexecuted_blocks=1 00:14:31.828 00:14:31.828 ' 00:14:31.828 
02:31:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.828 --rc genhtml_branch_coverage=1 00:14:31.828 --rc genhtml_function_coverage=1 00:14:31.828 --rc genhtml_legend=1 00:14:31.828 --rc geninfo_all_blocks=1 00:14:31.828 --rc geninfo_unexecuted_blocks=1 00:14:31.828 00:14:31.828 ' 00:14:31.828 02:31:12 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.828 02:31:12 -- nvmf/common.sh@7 -- # uname -s 00:14:31.828 02:31:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.828 02:31:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.828 02:31:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.828 02:31:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.828 02:31:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.828 02:31:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.828 02:31:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.828 02:31:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.828 02:31:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.828 02:31:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.828 02:31:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:14:31.828 02:31:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:14:31.828 02:31:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.828 02:31:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.828 02:31:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.828 02:31:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.828 02:31:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.828 02:31:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.828 02:31:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.828 02:31:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.828 02:31:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.828 02:31:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.828 02:31:12 -- paths/export.sh@5 -- # export PATH 00:14:31.828 02:31:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.828 02:31:12 -- nvmf/common.sh@46 -- # : 0 00:14:31.828 02:31:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:31.828 02:31:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:31.828 02:31:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:31.828 02:31:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.828 02:31:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.828 02:31:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:31.828 02:31:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:31.828 02:31:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:31.828 02:31:12 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.828 02:31:12 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.828 02:31:12 -- target/host_management.sh@104 -- # nvmftestinit 00:14:31.828 02:31:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:31.828 02:31:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.828 02:31:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:31.828 02:31:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:31.828 02:31:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:31.828 02:31:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.828 02:31:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.828 02:31:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.828 02:31:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:31.828 02:31:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:31.828 02:31:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:31.828 02:31:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:31.828 02:31:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:31.828 02:31:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:31.828 02:31:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.828 02:31:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.828 02:31:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:31.828 02:31:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:31.828 02:31:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.828 02:31:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.828 02:31:12 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.828 02:31:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.828 02:31:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.828 02:31:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.828 02:31:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.828 02:31:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.828 02:31:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:32.087 02:31:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:32.087 Cannot find device "nvmf_tgt_br" 00:14:32.087 02:31:12 -- nvmf/common.sh@154 -- # true 00:14:32.087 02:31:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:32.087 Cannot find device "nvmf_tgt_br2" 00:14:32.087 02:31:12 -- nvmf/common.sh@155 -- # true 00:14:32.087 02:31:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:32.087 02:31:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:32.087 Cannot find device "nvmf_tgt_br" 00:14:32.087 02:31:12 -- nvmf/common.sh@157 -- # true 00:14:32.087 02:31:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:32.087 Cannot find device "nvmf_tgt_br2" 00:14:32.087 02:31:12 -- nvmf/common.sh@158 -- # true 00:14:32.087 02:31:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:32.087 02:31:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:32.087 02:31:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:32.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.087 02:31:12 -- nvmf/common.sh@161 -- # true 00:14:32.087 02:31:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:32.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:32.087 02:31:12 -- nvmf/common.sh@162 -- # true 00:14:32.087 02:31:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:32.087 02:31:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:32.087 02:31:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:32.087 02:31:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:32.087 02:31:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:32.087 02:31:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:32.087 02:31:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:32.087 02:31:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:32.087 02:31:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:32.087 02:31:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:32.087 02:31:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:32.345 02:31:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:32.345 02:31:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:32.345 02:31:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:32.345 02:31:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:32.345 02:31:12 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:14:32.345 02:31:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:32.345 02:31:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:32.345 02:31:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:32.345 02:31:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:32.345 02:31:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:32.345 02:31:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:32.345 02:31:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:32.345 02:31:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:32.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:14:32.345 00:14:32.345 --- 10.0.0.2 ping statistics --- 00:14:32.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.345 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:32.345 02:31:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:32.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:32.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:14:32.345 00:14:32.345 --- 10.0.0.3 ping statistics --- 00:14:32.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.345 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:32.345 02:31:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:32.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:14:32.345 00:14:32.345 --- 10.0.0.1 ping statistics --- 00:14:32.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.345 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:32.345 02:31:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.345 02:31:12 -- nvmf/common.sh@421 -- # return 0 00:14:32.345 02:31:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:32.345 02:31:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.345 02:31:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:32.345 02:31:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:32.345 02:31:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.345 02:31:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:32.345 02:31:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:32.345 02:31:12 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:32.345 02:31:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:32.345 02:31:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.345 02:31:12 -- common/autotest_common.sh@10 -- # set +x 00:14:32.346 ************************************ 00:14:32.346 START TEST nvmf_host_management 00:14:32.346 ************************************ 00:14:32.346 02:31:12 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:32.346 02:31:12 -- target/host_management.sh@69 -- # starttarget 00:14:32.346 02:31:12 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:32.346 02:31:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:32.346 02:31:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.346 02:31:12 -- common/autotest_common.sh@10 -- # set +x 00:14:32.346 02:31:12 -- nvmf/common.sh@469 -- # nvmfpid=72136 00:14:32.346 
02:31:12 -- nvmf/common.sh@470 -- # waitforlisten 72136 00:14:32.346 02:31:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:32.346 02:31:12 -- common/autotest_common.sh@829 -- # '[' -z 72136 ']' 00:14:32.346 02:31:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.346 02:31:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.346 02:31:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.346 02:31:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.346 02:31:12 -- common/autotest_common.sh@10 -- # set +x 00:14:32.346 [2024-11-21 02:31:12.925179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:32.346 [2024-11-21 02:31:12.925295] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.604 [2024-11-21 02:31:13.066782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.604 [2024-11-21 02:31:13.229082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:32.604 [2024-11-21 02:31:13.229296] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.604 [2024-11-21 02:31:13.229321] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.604 [2024-11-21 02:31:13.229333] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
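
A condensed replay of the nvmf_veth_init sequence traced above: the target lives in its own network namespace and is reached from the host over veth pairs joined on the nvmf_br bridge (10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the namespace). Run as root; the interface names and addresses are the ones from the trace, only the grouping into one block is mine.

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host -> namespace sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1              # namespace -> host
# The target then runs inside the namespace, as in the trace:
#   ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
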
00:14:32.604 [2024-11-21 02:31:13.229584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.604 [2024-11-21 02:31:13.230212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.604 [2024-11-21 02:31:13.231571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:32.604 [2024-11-21 02:31:13.231587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.538 02:31:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.538 02:31:13 -- common/autotest_common.sh@862 -- # return 0 00:14:33.538 02:31:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:33.538 02:31:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.538 02:31:13 -- common/autotest_common.sh@10 -- # set +x 00:14:33.538 02:31:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.538 02:31:13 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.538 02:31:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.538 02:31:13 -- common/autotest_common.sh@10 -- # set +x 00:14:33.538 [2024-11-21 02:31:14.000834] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.538 02:31:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.538 02:31:14 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:33.538 02:31:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.538 02:31:14 -- common/autotest_common.sh@10 -- # set +x 00:14:33.538 02:31:14 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:33.538 02:31:14 -- target/host_management.sh@23 -- # cat 00:14:33.538 02:31:14 -- target/host_management.sh@30 -- # rpc_cmd 00:14:33.538 02:31:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.538 02:31:14 -- common/autotest_common.sh@10 -- # set +x 00:14:33.538 Malloc0 00:14:33.538 [2024-11-21 02:31:14.082571] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.538 02:31:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.538 02:31:14 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:33.538 02:31:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.538 02:31:14 -- common/autotest_common.sh@10 -- # set +x 00:14:33.538 02:31:14 -- target/host_management.sh@73 -- # perfpid=72208 00:14:33.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.538 02:31:14 -- target/host_management.sh@74 -- # waitforlisten 72208 /var/tmp/bdevperf.sock 00:14:33.538 02:31:14 -- common/autotest_common.sh@829 -- # '[' -z 72208 ']' 00:14:33.538 02:31:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.538 02:31:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.538 02:31:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
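
The rpcs.txt batch that host_management.sh cats into rpc_cmd is not echoed in the log; a minimal sequence that would produce the Malloc0 bdev and the "Listening on 10.0.0.2 port 4420" notice seen here would look like the following. The transport line is taken from the trace; the subsystem NQN matches the bdevperf config that follows, and the serial number reuses NVMF_SERIAL from nvmf/common.sh, so treat the exact subsystem options as an assumption.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
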
00:14:33.538 02:31:14 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:33.538 02:31:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.538 02:31:14 -- common/autotest_common.sh@10 -- # set +x 00:14:33.538 02:31:14 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:33.538 02:31:14 -- nvmf/common.sh@520 -- # config=() 00:14:33.538 02:31:14 -- nvmf/common.sh@520 -- # local subsystem config 00:14:33.538 02:31:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:33.538 02:31:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:33.538 { 00:14:33.538 "params": { 00:14:33.538 "name": "Nvme$subsystem", 00:14:33.538 "trtype": "$TEST_TRANSPORT", 00:14:33.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.538 "adrfam": "ipv4", 00:14:33.538 "trsvcid": "$NVMF_PORT", 00:14:33.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.538 "hdgst": ${hdgst:-false}, 00:14:33.538 "ddgst": ${ddgst:-false} 00:14:33.538 }, 00:14:33.538 "method": "bdev_nvme_attach_controller" 00:14:33.538 } 00:14:33.538 EOF 00:14:33.538 )") 00:14:33.538 02:31:14 -- nvmf/common.sh@542 -- # cat 00:14:33.538 02:31:14 -- nvmf/common.sh@544 -- # jq . 00:14:33.538 02:31:14 -- nvmf/common.sh@545 -- # IFS=, 00:14:33.538 02:31:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:33.538 "params": { 00:14:33.538 "name": "Nvme0", 00:14:33.538 "trtype": "tcp", 00:14:33.538 "traddr": "10.0.0.2", 00:14:33.538 "adrfam": "ipv4", 00:14:33.538 "trsvcid": "4420", 00:14:33.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:33.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:33.538 "hdgst": false, 00:14:33.538 "ddgst": false 00:14:33.538 }, 00:14:33.538 "method": "bdev_nvme_attach_controller" 00:14:33.538 }' 00:14:33.796 [2024-11-21 02:31:14.191645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:33.796 [2024-11-21 02:31:14.191772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72208 ] 00:14:33.796 [2024-11-21 02:31:14.327015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.053 [2024-11-21 02:31:14.461914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.053 Running I/O for 10 seconds... 
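
What the bdevperf side above boils down to: a JSON config that attaches an NVMe-oF/TCP controller (values copied from the rendered config in the log) is fed to bdevperf, which then runs a 10-second verify workload with 64 KiB I/O at queue depth 64. The trace passes the config over /dev/fd/63 via process substitution; a file works the same way. Only the bdev_nvme_attach_controller entry is echoed in the trace, so the outer "subsystems"/"bdev" wrapper shown here is the standard SPDK JSON-config shape and is assumed.

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
    -q 64 -o 65536 -w verify -t 10
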
00:14:34.622 02:31:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.622 02:31:15 -- common/autotest_common.sh@862 -- # return 0 00:14:34.622 02:31:15 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:34.622 02:31:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.622 02:31:15 -- common/autotest_common.sh@10 -- # set +x 00:14:34.622 02:31:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.622 02:31:15 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:34.622 02:31:15 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:34.622 02:31:15 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:34.622 02:31:15 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:34.622 02:31:15 -- target/host_management.sh@52 -- # local ret=1 00:14:34.622 02:31:15 -- target/host_management.sh@53 -- # local i 00:14:34.622 02:31:15 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:34.622 02:31:15 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:34.622 02:31:15 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:34.622 02:31:15 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:34.622 02:31:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.622 02:31:15 -- common/autotest_common.sh@10 -- # set +x 00:14:34.622 02:31:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.622 02:31:15 -- target/host_management.sh@55 -- # read_io_count=1659 00:14:34.622 02:31:15 -- target/host_management.sh@58 -- # '[' 1659 -ge 100 ']' 00:14:34.622 02:31:15 -- target/host_management.sh@59 -- # ret=0 00:14:34.622 02:31:15 -- target/host_management.sh@60 -- # break 00:14:34.622 02:31:15 -- target/host_management.sh@64 -- # return 0 00:14:34.622 02:31:15 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:34.622 02:31:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.622 02:31:15 -- common/autotest_common.sh@10 -- # set +x 00:14:34.622 [2024-11-21 02:31:15.196349] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196426] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196453] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196470] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the 
state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196478] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196486] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196520] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196545] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196554] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196596] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.622 [2024-11-21 02:31:15.196653] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196677] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196726] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196827] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.196843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4910 is same with the state(5) to be set 00:14:34.623 [2024-11-21 02:31:15.199799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.623 
[2024-11-21 02:31:15.199830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:34.623 [... admin queue (qid:0): the remaining outstanding ASYNC EVENT REQUEST commands (cid:1-3) printed and completed with ABORTED - SQ DELETION (00/08); repeated nvme_qpair.c NOTICE lines condensed ...]
00:14:34.623 [2024-11-21 02:31:15.199899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1601dc0 is same with the state(5) to be set
00:14:34.623 [... I/O queue (sqid:1): all outstanding READ/WRITE commands (len:128, lba 99712 through 109440) printed and completed with ABORTED - SQ DELETION (00/08) qid:1 while the controller was reset; repeated nvme_qpair.c NOTICE lines condensed ...]
00:14:34.624 02:31:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.624 02:31:15 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:34.625 task offset: 104704 on job bdev=Nvme0n1 fails
00:14:34.625 
00:14:34.625 Latency(us)
00:14:34.625 [2024-11-21T02:31:15.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.625 [2024-11-21T02:31:15.272Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:34.625 [2024-11-21T02:31:15.272Z] Job: Nvme0n1 ended in about 0.57 seconds with error
00:14:34.625 Verification LBA range: start 0x0 length 0x400
00:14:34.625 Nvme0n1 : 0.57 3212.79 200.80 113.23 0.00 18887.41 1995.87 26571.87
[2024-11-21T02:31:15.272Z] =================================================================================================================== 00:14:34.625 [2024-11-21T02:31:15.272Z] Total : 3212.79 200.80 113.23 0.00 18887.41 1995.87 26571.87 00:14:34.625 [2024-11-21 02:31:15.201348] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15d5400 was disconnected and freed. reset controller. 00:14:34.625 02:31:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.625 02:31:15 -- common/autotest_common.sh@10 -- # set +x 00:14:34.625 [2024-11-21 02:31:15.202501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:34.625 [2024-11-21 02:31:15.204448] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:34.625 [2024-11-21 02:31:15.204470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1601dc0 (9): Bad file descriptor 00:14:34.625 02:31:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.625 02:31:15 -- target/host_management.sh@87 -- # sleep 1 00:14:34.625 [2024-11-21 02:31:15.215294] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:36.003 02:31:16 -- target/host_management.sh@91 -- # kill -9 72208 00:14:36.003 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72208) - No such process 00:14:36.003 02:31:16 -- target/host_management.sh@91 -- # true 00:14:36.003 02:31:16 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:36.003 02:31:16 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:36.003 02:31:16 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:36.003 02:31:16 -- nvmf/common.sh@520 -- # config=() 00:14:36.003 02:31:16 -- nvmf/common.sh@520 -- # local subsystem config 00:14:36.003 02:31:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:36.003 02:31:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:36.003 { 00:14:36.003 "params": { 00:14:36.003 "name": "Nvme$subsystem", 00:14:36.003 "trtype": "$TEST_TRANSPORT", 00:14:36.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.003 "adrfam": "ipv4", 00:14:36.003 "trsvcid": "$NVMF_PORT", 00:14:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.003 "hdgst": ${hdgst:-false}, 00:14:36.003 "ddgst": ${ddgst:-false} 00:14:36.003 }, 00:14:36.003 "method": "bdev_nvme_attach_controller" 00:14:36.003 } 00:14:36.003 EOF 00:14:36.003 )") 00:14:36.003 02:31:16 -- nvmf/common.sh@542 -- # cat 00:14:36.003 02:31:16 -- nvmf/common.sh@544 -- # jq . 00:14:36.003 02:31:16 -- nvmf/common.sh@545 -- # IFS=, 00:14:36.003 02:31:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:36.003 "params": { 00:14:36.003 "name": "Nvme0", 00:14:36.003 "trtype": "tcp", 00:14:36.003 "traddr": "10.0.0.2", 00:14:36.003 "adrfam": "ipv4", 00:14:36.003 "trsvcid": "4420", 00:14:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:36.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:36.003 "hdgst": false, 00:14:36.003 "ddgst": false 00:14:36.003 }, 00:14:36.003 "method": "bdev_nvme_attach_controller" 00:14:36.003 }' 00:14:36.003 [2024-11-21 02:31:16.275517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
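The bdevperf relaunch above is driven entirely by the JSON fragment that gen_nvmf_target_json prints: a single bdev_nvme_attach_controller call aimed at the target at 10.0.0.2:4420, fed to bdevperf through /dev/fd/62. A minimal standalone sketch of the same step follows; it assumes the usual SPDK "subsystems"/"config" wrapper around that fragment, and the /tmp file name is illustrative, not part of this run:

  # Sketch only: replay the traced step with a config file instead of /dev/fd/62.
  # The "subsystems" wrapper is an assumption; the params block is copied from the log above.
  cat > /tmp/nvme0_attach.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Same workload parameters as the run above: queue depth 64, 64 KiB I/O, verify, 1 second.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 1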
00:14:36.003 [2024-11-21 02:31:16.275628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72258 ] 00:14:36.003 [2024-11-21 02:31:16.414882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.003 [2024-11-21 02:31:16.532355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.269 Running I/O for 1 seconds... 00:14:37.218 00:14:37.218 Latency(us) 00:14:37.218 [2024-11-21T02:31:17.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.218 [2024-11-21T02:31:17.865Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:37.218 Verification LBA range: start 0x0 length 0x400 00:14:37.218 Nvme0n1 : 1.01 3439.72 214.98 0.00 0.00 18276.91 1980.97 25261.15 00:14:37.218 [2024-11-21T02:31:17.865Z] =================================================================================================================== 00:14:37.218 [2024-11-21T02:31:17.865Z] Total : 3439.72 214.98 0.00 0.00 18276.91 1980.97 25261.15 00:14:37.476 02:31:17 -- target/host_management.sh@101 -- # stoptarget 00:14:37.476 02:31:17 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:37.476 02:31:17 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:37.476 02:31:17 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:37.476 02:31:17 -- target/host_management.sh@40 -- # nvmftestfini 00:14:37.476 02:31:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:37.476 02:31:17 -- nvmf/common.sh@116 -- # sync 00:14:37.476 02:31:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:37.476 02:31:18 -- nvmf/common.sh@119 -- # set +e 00:14:37.476 02:31:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:37.476 02:31:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:37.476 rmmod nvme_tcp 00:14:37.476 rmmod nvme_fabrics 00:14:37.476 rmmod nvme_keyring 00:14:37.476 02:31:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:37.476 02:31:18 -- nvmf/common.sh@123 -- # set -e 00:14:37.476 02:31:18 -- nvmf/common.sh@124 -- # return 0 00:14:37.476 02:31:18 -- nvmf/common.sh@477 -- # '[' -n 72136 ']' 00:14:37.476 02:31:18 -- nvmf/common.sh@478 -- # killprocess 72136 00:14:37.476 02:31:18 -- common/autotest_common.sh@936 -- # '[' -z 72136 ']' 00:14:37.476 02:31:18 -- common/autotest_common.sh@940 -- # kill -0 72136 00:14:37.476 02:31:18 -- common/autotest_common.sh@941 -- # uname 00:14:37.476 02:31:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:37.476 02:31:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72136 00:14:37.476 02:31:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:37.476 02:31:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:37.476 killing process with pid 72136 00:14:37.476 02:31:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72136' 00:14:37.476 02:31:18 -- common/autotest_common.sh@955 -- # kill 72136 00:14:37.477 02:31:18 -- common/autotest_common.sh@960 -- # wait 72136 00:14:38.044 [2024-11-21 02:31:18.464218] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:38.044 02:31:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:38.044 02:31:18 -- nvmf/common.sh@483 
-- # [[ tcp == \t\c\p ]] 00:14:38.044 02:31:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:38.044 02:31:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.044 02:31:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:38.044 02:31:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.044 02:31:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.044 02:31:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.044 02:31:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:38.044 00:14:38.044 real 0m5.669s 00:14:38.044 user 0m23.340s 00:14:38.044 sys 0m1.266s 00:14:38.044 02:31:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:38.044 ************************************ 00:14:38.044 END TEST nvmf_host_management 00:14:38.044 ************************************ 00:14:38.044 02:31:18 -- common/autotest_common.sh@10 -- # set +x 00:14:38.044 02:31:18 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:38.044 00:14:38.044 real 0m6.327s 00:14:38.044 user 0m23.542s 00:14:38.044 sys 0m1.564s 00:14:38.044 02:31:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:38.044 02:31:18 -- common/autotest_common.sh@10 -- # set +x 00:14:38.044 ************************************ 00:14:38.044 END TEST nvmf_host_management 00:14:38.044 ************************************ 00:14:38.044 02:31:18 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:38.044 02:31:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:38.044 02:31:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.044 02:31:18 -- common/autotest_common.sh@10 -- # set +x 00:14:38.044 ************************************ 00:14:38.044 START TEST nvmf_lvol 00:14:38.044 ************************************ 00:14:38.044 02:31:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:38.303 * Looking for test storage... 00:14:38.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:38.303 02:31:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:38.303 02:31:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:38.303 02:31:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:38.303 02:31:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:38.303 02:31:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:38.303 02:31:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:38.303 02:31:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:38.303 02:31:18 -- scripts/common.sh@335 -- # IFS=.-: 00:14:38.303 02:31:18 -- scripts/common.sh@335 -- # read -ra ver1 00:14:38.303 02:31:18 -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.303 02:31:18 -- scripts/common.sh@336 -- # read -ra ver2 00:14:38.303 02:31:18 -- scripts/common.sh@337 -- # local 'op=<' 00:14:38.303 02:31:18 -- scripts/common.sh@339 -- # ver1_l=2 00:14:38.303 02:31:18 -- scripts/common.sh@340 -- # ver2_l=1 00:14:38.303 02:31:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:38.303 02:31:18 -- scripts/common.sh@343 -- # case "$op" in 00:14:38.303 02:31:18 -- scripts/common.sh@344 -- # : 1 00:14:38.303 02:31:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:38.303 02:31:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:38.303 02:31:18 -- scripts/common.sh@364 -- # decimal 1 00:14:38.303 02:31:18 -- scripts/common.sh@352 -- # local d=1 00:14:38.303 02:31:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.303 02:31:18 -- scripts/common.sh@354 -- # echo 1 00:14:38.303 02:31:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:38.303 02:31:18 -- scripts/common.sh@365 -- # decimal 2 00:14:38.303 02:31:18 -- scripts/common.sh@352 -- # local d=2 00:14:38.303 02:31:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.303 02:31:18 -- scripts/common.sh@354 -- # echo 2 00:14:38.303 02:31:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:38.303 02:31:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:38.303 02:31:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:38.303 02:31:18 -- scripts/common.sh@367 -- # return 0 00:14:38.303 02:31:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.303 02:31:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:38.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.303 --rc genhtml_branch_coverage=1 00:14:38.303 --rc genhtml_function_coverage=1 00:14:38.303 --rc genhtml_legend=1 00:14:38.304 --rc geninfo_all_blocks=1 00:14:38.304 --rc geninfo_unexecuted_blocks=1 00:14:38.304 00:14:38.304 ' 00:14:38.304 02:31:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:38.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.304 --rc genhtml_branch_coverage=1 00:14:38.304 --rc genhtml_function_coverage=1 00:14:38.304 --rc genhtml_legend=1 00:14:38.304 --rc geninfo_all_blocks=1 00:14:38.304 --rc geninfo_unexecuted_blocks=1 00:14:38.304 00:14:38.304 ' 00:14:38.304 02:31:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:38.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.304 --rc genhtml_branch_coverage=1 00:14:38.304 --rc genhtml_function_coverage=1 00:14:38.304 --rc genhtml_legend=1 00:14:38.304 --rc geninfo_all_blocks=1 00:14:38.304 --rc geninfo_unexecuted_blocks=1 00:14:38.304 00:14:38.304 ' 00:14:38.304 02:31:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:38.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.304 --rc genhtml_branch_coverage=1 00:14:38.304 --rc genhtml_function_coverage=1 00:14:38.304 --rc genhtml_legend=1 00:14:38.304 --rc geninfo_all_blocks=1 00:14:38.304 --rc geninfo_unexecuted_blocks=1 00:14:38.304 00:14:38.304 ' 00:14:38.304 02:31:18 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.304 02:31:18 -- nvmf/common.sh@7 -- # uname -s 00:14:38.304 02:31:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.304 02:31:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.304 02:31:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.304 02:31:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.304 02:31:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.304 02:31:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.304 02:31:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.304 02:31:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.304 02:31:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.304 02:31:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.304 02:31:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:14:38.304 
02:31:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:14:38.304 02:31:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.304 02:31:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.304 02:31:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.304 02:31:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.304 02:31:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.304 02:31:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.304 02:31:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.304 02:31:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.304 02:31:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.304 02:31:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.304 02:31:18 -- paths/export.sh@5 -- # export PATH 00:14:38.304 02:31:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.304 02:31:18 -- nvmf/common.sh@46 -- # : 0 00:14:38.304 02:31:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:38.304 02:31:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:38.304 02:31:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:38.304 02:31:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.304 02:31:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.304 02:31:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:14:38.304 02:31:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:38.304 02:31:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:38.304 02:31:18 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.304 02:31:18 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.304 02:31:18 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:38.304 02:31:18 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:38.304 02:31:18 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.304 02:31:18 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:38.304 02:31:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:38.304 02:31:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.304 02:31:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:38.304 02:31:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:38.304 02:31:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:38.304 02:31:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.304 02:31:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.304 02:31:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.304 02:31:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:38.304 02:31:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:38.304 02:31:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:38.304 02:31:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:38.304 02:31:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:38.304 02:31:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:38.304 02:31:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.304 02:31:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.304 02:31:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:38.304 02:31:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:38.304 02:31:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.304 02:31:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.304 02:31:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.304 02:31:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.304 02:31:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.304 02:31:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.304 02:31:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.304 02:31:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.304 02:31:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:38.304 02:31:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:38.304 Cannot find device "nvmf_tgt_br" 00:14:38.304 02:31:18 -- nvmf/common.sh@154 -- # true 00:14:38.304 02:31:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.304 Cannot find device "nvmf_tgt_br2" 00:14:38.304 02:31:18 -- nvmf/common.sh@155 -- # true 00:14:38.304 02:31:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:38.304 02:31:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:38.304 Cannot find device "nvmf_tgt_br" 00:14:38.304 02:31:18 -- nvmf/common.sh@157 -- # true 00:14:38.304 02:31:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:38.304 Cannot find device "nvmf_tgt_br2" 00:14:38.304 02:31:18 -- nvmf/common.sh@158 -- # true 00:14:38.304 02:31:18 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:14:38.304 02:31:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:38.304 02:31:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.563 02:31:18 -- nvmf/common.sh@161 -- # true 00:14:38.563 02:31:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.563 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.563 02:31:18 -- nvmf/common.sh@162 -- # true 00:14:38.563 02:31:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.563 02:31:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.563 02:31:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.563 02:31:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.563 02:31:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.563 02:31:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.563 02:31:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.563 02:31:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:38.563 02:31:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:38.563 02:31:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:38.563 02:31:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:38.563 02:31:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:38.563 02:31:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:38.563 02:31:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.563 02:31:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.563 02:31:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.563 02:31:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:38.563 02:31:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:38.563 02:31:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.563 02:31:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.563 02:31:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.564 02:31:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.564 02:31:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.564 02:31:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:38.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:38.564 00:14:38.564 --- 10.0.0.2 ping statistics --- 00:14:38.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.564 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:38.564 02:31:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:38.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:38.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:14:38.564 00:14:38.564 --- 10.0.0.3 ping statistics --- 00:14:38.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.564 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:38.564 02:31:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:38.564 00:14:38.564 --- 10.0.0.1 ping statistics --- 00:14:38.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.564 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:38.564 02:31:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.564 02:31:19 -- nvmf/common.sh@421 -- # return 0 00:14:38.564 02:31:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:38.564 02:31:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.564 02:31:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:38.564 02:31:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:38.564 02:31:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.564 02:31:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:38.564 02:31:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:38.564 02:31:19 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:38.564 02:31:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:38.564 02:31:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.564 02:31:19 -- common/autotest_common.sh@10 -- # set +x 00:14:38.564 02:31:19 -- nvmf/common.sh@469 -- # nvmfpid=72493 00:14:38.564 02:31:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:38.564 02:31:19 -- nvmf/common.sh@470 -- # waitforlisten 72493 00:14:38.564 02:31:19 -- common/autotest_common.sh@829 -- # '[' -z 72493 ']' 00:14:38.564 02:31:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.564 02:31:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.564 02:31:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.564 02:31:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.564 02:31:19 -- common/autotest_common.sh@10 -- # set +x 00:14:38.564 [2024-11-21 02:31:19.203849] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:38.564 [2024-11-21 02:31:19.203953] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.822 [2024-11-21 02:31:19.344441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:39.081 [2024-11-21 02:31:19.477001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:39.081 [2024-11-21 02:31:19.477196] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.081 [2024-11-21 02:31:19.477218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:39.081 [2024-11-21 02:31:19.477231] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.081 [2024-11-21 02:31:19.477458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.081 [2024-11-21 02:31:19.477623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.081 [2024-11-21 02:31:19.477636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.648 02:31:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:39.648 02:31:20 -- common/autotest_common.sh@862 -- # return 0 00:14:39.648 02:31:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:39.648 02:31:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:39.648 02:31:20 -- common/autotest_common.sh@10 -- # set +x 00:14:39.648 02:31:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.648 02:31:20 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:39.907 [2024-11-21 02:31:20.546278] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.166 02:31:20 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:40.426 02:31:20 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:40.426 02:31:20 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:40.685 02:31:21 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:40.685 02:31:21 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:40.943 02:31:21 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:41.202 02:31:21 -- target/nvmf_lvol.sh@29 -- # lvs=3efbff8c-ad9a-4ab7-9039-f9b61a0b5881 00:14:41.202 02:31:21 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3efbff8c-ad9a-4ab7-9039-f9b61a0b5881 lvol 20 00:14:41.460 02:31:21 -- target/nvmf_lvol.sh@32 -- # lvol=1f60f227-e430-4e1a-a432-2664a0d49a59 00:14:41.460 02:31:21 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:41.719 02:31:22 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1f60f227-e430-4e1a-a432-2664a0d49a59 00:14:41.977 02:31:22 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:42.235 [2024-11-21 02:31:22.649259] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.235 02:31:22 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.494 02:31:22 -- target/nvmf_lvol.sh@42 -- # perf_pid=72644 00:14:42.494 02:31:22 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:42.494 02:31:22 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:43.429 02:31:23 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1f60f227-e430-4e1a-a432-2664a0d49a59 MY_SNAPSHOT 
00:14:43.687 02:31:24 -- target/nvmf_lvol.sh@47 -- # snapshot=b7012e89-8e55-48c1-abda-f95dc0aa04df 00:14:43.687 02:31:24 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1f60f227-e430-4e1a-a432-2664a0d49a59 30 00:14:44.254 02:31:24 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b7012e89-8e55-48c1-abda-f95dc0aa04df MY_CLONE 00:14:44.513 02:31:24 -- target/nvmf_lvol.sh@49 -- # clone=e19466b8-5500-4e5d-9886-d70220150805 00:14:44.513 02:31:24 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e19466b8-5500-4e5d-9886-d70220150805 00:14:45.448 02:31:25 -- target/nvmf_lvol.sh@53 -- # wait 72644 00:14:53.605 Initializing NVMe Controllers 00:14:53.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:53.605 Controller IO queue size 128, less than required. 00:14:53.605 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:53.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:53.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:53.605 Initialization complete. Launching workers. 00:14:53.605 ======================================================== 00:14:53.605 Latency(us) 00:14:53.605 Device Information : IOPS MiB/s Average min max 00:14:53.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8795.10 34.36 14559.08 1362.41 100596.21 00:14:53.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8273.00 32.32 15479.80 3627.17 93331.29 00:14:53.605 ======================================================== 00:14:53.605 Total : 17068.09 66.67 15005.36 1362.41 100596.21 00:14:53.605 00:14:53.605 02:31:33 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:53.605 02:31:33 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1f60f227-e430-4e1a-a432-2664a0d49a59 00:14:53.605 02:31:33 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3efbff8c-ad9a-4ab7-9039-f9b61a0b5881 00:14:53.605 02:31:33 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:53.605 02:31:33 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:53.605 02:31:33 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:53.605 02:31:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.605 02:31:33 -- nvmf/common.sh@116 -- # sync 00:14:53.605 02:31:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:53.605 02:31:33 -- nvmf/common.sh@119 -- # set +e 00:14:53.605 02:31:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.605 02:31:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:53.605 rmmod nvme_tcp 00:14:53.605 rmmod nvme_fabrics 00:14:53.605 rmmod nvme_keyring 00:14:53.605 02:31:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.605 02:31:34 -- nvmf/common.sh@123 -- # set -e 00:14:53.605 02:31:34 -- nvmf/common.sh@124 -- # return 0 00:14:53.605 02:31:34 -- nvmf/common.sh@477 -- # '[' -n 72493 ']' 00:14:53.605 02:31:34 -- nvmf/common.sh@478 -- # killprocess 72493 00:14:53.605 02:31:34 -- common/autotest_common.sh@936 -- # '[' -z 72493 ']' 00:14:53.605 02:31:34 -- common/autotest_common.sh@940 -- # kill -0 72493 00:14:53.605 02:31:34 -- common/autotest_common.sh@941 -- # uname 00:14:53.605 
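Stripped of the xtrace noise, the nvmf_lvol flow traced above reduces to a short RPC sequence: build a raid0 from two malloc bdevs, create an lvstore and a lvol of size 20 on it (LVOL_BDEV_INIT_SIZE, later resized to 30), export the lvol over NVMe/TCP, then snapshot, resize, clone and inflate it while spdk_nvme_perf writes to the namespace. A condensed sketch of those calls; the shell variables here ($rpc, $lvs, $lvol, $snap, $clone) are just shorthand for the values the script captured, not part of the original trace:

  # Condensed sketch of the RPCs traced above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                          # -> Malloc0
  $rpc bdev_malloc_create 64 512                          # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # e.g. 3efbff8c-ad9a-4ab7-9039-f9b61a0b5881
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # e.g. 1f60f227-e430-4e1a-a432-2664a0d49a59
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf runs against 10.0.0.2:4420 (randwrite, qd 128, 10 s), grow and clone the volume:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  # Teardown mirrors the tail of the log: drop the subsystem, the lvol and the lvstore.
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"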
02:31:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.605 02:31:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72493 00:14:53.605 killing process with pid 72493 00:14:53.605 02:31:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:53.605 02:31:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:53.605 02:31:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72493' 00:14:53.605 02:31:34 -- common/autotest_common.sh@955 -- # kill 72493 00:14:53.605 02:31:34 -- common/autotest_common.sh@960 -- # wait 72493 00:14:53.864 02:31:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.864 02:31:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:53.864 02:31:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:53.864 02:31:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.864 02:31:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:53.864 02:31:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.864 02:31:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.864 02:31:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.864 02:31:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:53.864 00:14:53.864 real 0m15.779s 00:14:53.864 user 1m6.120s 00:14:53.864 sys 0m3.609s 00:14:53.864 02:31:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:53.864 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:14:53.864 ************************************ 00:14:53.864 END TEST nvmf_lvol 00:14:53.864 ************************************ 00:14:53.864 02:31:34 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:53.864 02:31:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:53.864 02:31:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.864 02:31:34 -- common/autotest_common.sh@10 -- # set +x 00:14:53.864 ************************************ 00:14:53.864 START TEST nvmf_lvs_grow 00:14:53.864 ************************************ 00:14:53.864 02:31:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:54.124 * Looking for test storage... 
00:14:54.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:54.124 02:31:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:54.124 02:31:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:54.124 02:31:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:54.124 02:31:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:54.124 02:31:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:54.124 02:31:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:54.124 02:31:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:54.124 02:31:34 -- scripts/common.sh@335 -- # IFS=.-: 00:14:54.124 02:31:34 -- scripts/common.sh@335 -- # read -ra ver1 00:14:54.124 02:31:34 -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.124 02:31:34 -- scripts/common.sh@336 -- # read -ra ver2 00:14:54.124 02:31:34 -- scripts/common.sh@337 -- # local 'op=<' 00:14:54.124 02:31:34 -- scripts/common.sh@339 -- # ver1_l=2 00:14:54.124 02:31:34 -- scripts/common.sh@340 -- # ver2_l=1 00:14:54.124 02:31:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:54.124 02:31:34 -- scripts/common.sh@343 -- # case "$op" in 00:14:54.124 02:31:34 -- scripts/common.sh@344 -- # : 1 00:14:54.124 02:31:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:54.124 02:31:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:54.124 02:31:34 -- scripts/common.sh@364 -- # decimal 1 00:14:54.124 02:31:34 -- scripts/common.sh@352 -- # local d=1 00:14:54.124 02:31:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.124 02:31:34 -- scripts/common.sh@354 -- # echo 1 00:14:54.124 02:31:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:54.124 02:31:34 -- scripts/common.sh@365 -- # decimal 2 00:14:54.124 02:31:34 -- scripts/common.sh@352 -- # local d=2 00:14:54.124 02:31:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.124 02:31:34 -- scripts/common.sh@354 -- # echo 2 00:14:54.124 02:31:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:54.124 02:31:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:54.124 02:31:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:54.124 02:31:34 -- scripts/common.sh@367 -- # return 0 00:14:54.124 02:31:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.124 02:31:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:54.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.124 --rc genhtml_branch_coverage=1 00:14:54.124 --rc genhtml_function_coverage=1 00:14:54.124 --rc genhtml_legend=1 00:14:54.124 --rc geninfo_all_blocks=1 00:14:54.124 --rc geninfo_unexecuted_blocks=1 00:14:54.124 00:14:54.124 ' 00:14:54.124 02:31:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:54.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.124 --rc genhtml_branch_coverage=1 00:14:54.124 --rc genhtml_function_coverage=1 00:14:54.124 --rc genhtml_legend=1 00:14:54.124 --rc geninfo_all_blocks=1 00:14:54.124 --rc geninfo_unexecuted_blocks=1 00:14:54.124 00:14:54.124 ' 00:14:54.124 02:31:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:54.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.124 --rc genhtml_branch_coverage=1 00:14:54.124 --rc genhtml_function_coverage=1 00:14:54.124 --rc genhtml_legend=1 00:14:54.124 --rc geninfo_all_blocks=1 00:14:54.124 --rc geninfo_unexecuted_blocks=1 00:14:54.124 00:14:54.124 ' 00:14:54.124 
02:31:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:54.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.124 --rc genhtml_branch_coverage=1 00:14:54.124 --rc genhtml_function_coverage=1 00:14:54.124 --rc genhtml_legend=1 00:14:54.124 --rc geninfo_all_blocks=1 00:14:54.124 --rc geninfo_unexecuted_blocks=1 00:14:54.124 00:14:54.124 ' 00:14:54.124 02:31:34 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.124 02:31:34 -- nvmf/common.sh@7 -- # uname -s 00:14:54.124 02:31:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.124 02:31:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.124 02:31:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.124 02:31:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.124 02:31:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.124 02:31:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.124 02:31:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.124 02:31:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.124 02:31:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.124 02:31:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.124 02:31:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:14:54.124 02:31:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:14:54.124 02:31:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.124 02:31:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.124 02:31:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:54.124 02:31:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.124 02:31:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.124 02:31:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.124 02:31:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.124 02:31:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.124 02:31:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.124 02:31:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.124 02:31:34 -- paths/export.sh@5 -- # export PATH 00:14:54.124 02:31:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.124 02:31:34 -- nvmf/common.sh@46 -- # : 0 00:14:54.124 02:31:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:54.124 02:31:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:54.124 02:31:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:54.124 02:31:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.124 02:31:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.124 02:31:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:54.124 02:31:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:54.124 02:31:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:54.124 02:31:34 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.124 02:31:34 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.124 02:31:34 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:54.124 02:31:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:54.124 02:31:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.124 02:31:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:54.124 02:31:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:54.124 02:31:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:54.124 02:31:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.124 02:31:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.124 02:31:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.124 02:31:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:54.124 02:31:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:54.124 02:31:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:54.124 02:31:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:54.124 02:31:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:54.124 02:31:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:54.124 02:31:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.124 02:31:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.124 02:31:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:54.124 02:31:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:54.124 02:31:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:54.124 02:31:34 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:54.124 02:31:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:54.124 02:31:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.124 02:31:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:54.124 02:31:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:54.124 02:31:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:54.124 02:31:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:54.124 02:31:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:54.124 02:31:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:54.124 Cannot find device "nvmf_tgt_br" 00:14:54.124 02:31:34 -- nvmf/common.sh@154 -- # true 00:14:54.124 02:31:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.124 Cannot find device "nvmf_tgt_br2" 00:14:54.124 02:31:34 -- nvmf/common.sh@155 -- # true 00:14:54.124 02:31:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:54.125 02:31:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:54.125 Cannot find device "nvmf_tgt_br" 00:14:54.125 02:31:34 -- nvmf/common.sh@157 -- # true 00:14:54.125 02:31:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:54.125 Cannot find device "nvmf_tgt_br2" 00:14:54.125 02:31:34 -- nvmf/common.sh@158 -- # true 00:14:54.125 02:31:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:54.125 02:31:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:54.383 02:31:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.383 02:31:34 -- nvmf/common.sh@161 -- # true 00:14:54.383 02:31:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.383 02:31:34 -- nvmf/common.sh@162 -- # true 00:14:54.383 02:31:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.383 02:31:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.383 02:31:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.383 02:31:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.383 02:31:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.383 02:31:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.383 02:31:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.383 02:31:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:54.384 02:31:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:54.384 02:31:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:54.384 02:31:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:54.384 02:31:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:54.384 02:31:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:54.384 02:31:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.384 02:31:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:54.384 02:31:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.384 02:31:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:54.384 02:31:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:54.384 02:31:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.384 02:31:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.384 02:31:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:54.384 02:31:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.384 02:31:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.384 02:31:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:54.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:54.384 00:14:54.384 --- 10.0.0.2 ping statistics --- 00:14:54.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.384 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:54.384 02:31:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:54.384 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.384 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:14:54.384 00:14:54.384 --- 10.0.0.3 ping statistics --- 00:14:54.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.384 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:14:54.384 02:31:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:54.384 00:14:54.384 --- 10.0.0.1 ping statistics --- 00:14:54.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.384 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:54.384 02:31:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.384 02:31:35 -- nvmf/common.sh@421 -- # return 0 00:14:54.384 02:31:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:54.384 02:31:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.384 02:31:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:54.384 02:31:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:54.384 02:31:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.384 02:31:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:54.384 02:31:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:54.643 02:31:35 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:54.643 02:31:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:54.643 02:31:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.643 02:31:35 -- common/autotest_common.sh@10 -- # set +x 00:14:54.643 02:31:35 -- nvmf/common.sh@469 -- # nvmfpid=73016 00:14:54.643 02:31:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:54.643 02:31:35 -- nvmf/common.sh@470 -- # waitforlisten 73016 00:14:54.643 02:31:35 -- common/autotest_common.sh@829 -- # '[' -z 73016 ']' 00:14:54.643 02:31:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.643 02:31:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
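For orientation, the nvmf_veth_init sequence above builds a small virtual topology: a veth pair for the initiator on the host (nvmf_init_if at 10.0.0.1), veth pairs for the target inside the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3), and the nvmf_br bridge tying the host-side peers together. The following is a minimal sketch of the same setup using the commands the log shows, reduced to a single target interface and with no cleanup or error handling; run as root:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the two host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # host -> namespace sanity check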
00:14:54.643 02:31:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.643 02:31:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.643 02:31:35 -- common/autotest_common.sh@10 -- # set +x 00:14:54.643 [2024-11-21 02:31:35.080531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:54.643 [2024-11-21 02:31:35.080606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.643 [2024-11-21 02:31:35.211510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.902 [2024-11-21 02:31:35.309457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:54.902 [2024-11-21 02:31:35.309622] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.902 [2024-11-21 02:31:35.309636] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.902 [2024-11-21 02:31:35.309646] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.903 [2024-11-21 02:31:35.309672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.491 02:31:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.491 02:31:36 -- common/autotest_common.sh@862 -- # return 0 00:14:55.491 02:31:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:55.491 02:31:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.491 02:31:36 -- common/autotest_common.sh@10 -- # set +x 00:14:55.491 02:31:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.750 02:31:36 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:56.009 [2024-11-21 02:31:36.406575] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.009 02:31:36 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:56.009 02:31:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:56.009 02:31:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.009 02:31:36 -- common/autotest_common.sh@10 -- # set +x 00:14:56.009 ************************************ 00:14:56.009 START TEST lvs_grow_clean 00:14:56.009 ************************************ 00:14:56.009 02:31:36 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:56.009 02:31:36 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:56.009 02:31:36 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:56.009 02:31:36 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:56.009 02:31:36 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:56.009 02:31:36 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:56.009 02:31:36 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:56.010 02:31:36 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:56.010 02:31:36 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:56.010 02:31:36 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:56.268 02:31:36 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:56.268 02:31:36 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:56.527 02:31:37 -- target/nvmf_lvs_grow.sh@28 -- # lvs=be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:14:56.528 02:31:37 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:14:56.528 02:31:37 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:56.787 02:31:37 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:56.787 02:31:37 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:56.787 02:31:37 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u be3c3602-ddeb-412f-8985-9217aaf5fd5d lvol 150 00:14:57.046 02:31:37 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6bba77c0-2935-422f-a7b8-476a38555388 00:14:57.046 02:31:37 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:57.046 02:31:37 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:57.046 [2024-11-21 02:31:37.653642] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:57.046 [2024-11-21 02:31:37.653702] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:57.046 true 00:14:57.046 02:31:37 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:14:57.046 02:31:37 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:57.305 02:31:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:57.305 02:31:37 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:57.564 02:31:38 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6bba77c0-2935-422f-a7b8-476a38555388 00:14:57.822 02:31:38 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:58.081 [2024-11-21 02:31:38.690169] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.081 02:31:38 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:58.340 02:31:38 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:58.340 02:31:38 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73178 00:14:58.340 02:31:38 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.340 02:31:38 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73178 /var/tmp/bdevperf.sock 00:14:58.341 02:31:38 -- common/autotest_common.sh@829 -- # '[' -z 73178 ']' 00:14:58.341 
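Condensed, the storage-side setup traced above is: a 200 MiB file exposed as a 4 KiB-block AIO bdev, an lvstore with 4 MiB clusters on top of it (49 usable data clusters), a 150 MiB lvol, the backing file then grown to 400 MiB and rescanned, and the lvol exported over NVMe/TCP to the bdevperf initiator. A sketch of that sequence using the same rpc.py calls, with hypothetical shell shorthands for the repo paths:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                   # shorthand, not in the original script
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$AIO"                                           # backing file for the AIO bdev
    $RPC bdev_aio_create "$AIO" aio_bdev 4096                         # expose it with a 4 KiB block size
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)              # 4 MiB clusters -> 49 data clusters
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)                  # 150 MiB logical volume

    truncate -s 400M "$AIO"                                           # grow the file underneath...
    $RPC bdev_aio_rescan aio_bdev                                     # ...and rescan (51200 -> 102400 blocks);
                                                                      # total_data_clusters stays 49 until the grow step

    $RPC nvmf_create_transport -t tcp -o -u 8192                      # done once per target, earlier in the log
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420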
02:31:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.341 02:31:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:58.341 02:31:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.341 02:31:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.341 02:31:38 -- common/autotest_common.sh@10 -- # set +x 00:14:58.341 [2024-11-21 02:31:38.958505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:58.341 [2024-11-21 02:31:38.958579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73178 ] 00:14:58.599 [2024-11-21 02:31:39.081923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.599 [2024-11-21 02:31:39.168038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.535 02:31:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.535 02:31:39 -- common/autotest_common.sh@862 -- # return 0 00:14:59.535 02:31:39 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:59.793 Nvme0n1 00:14:59.793 02:31:40 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:00.053 [ 00:15:00.053 { 00:15:00.053 "aliases": [ 00:15:00.053 "6bba77c0-2935-422f-a7b8-476a38555388" 00:15:00.053 ], 00:15:00.053 "assigned_rate_limits": { 00:15:00.053 "r_mbytes_per_sec": 0, 00:15:00.053 "rw_ios_per_sec": 0, 00:15:00.053 "rw_mbytes_per_sec": 0, 00:15:00.053 "w_mbytes_per_sec": 0 00:15:00.053 }, 00:15:00.053 "block_size": 4096, 00:15:00.053 "claimed": false, 00:15:00.053 "driver_specific": { 00:15:00.053 "mp_policy": "active_passive", 00:15:00.053 "nvme": [ 00:15:00.053 { 00:15:00.053 "ctrlr_data": { 00:15:00.053 "ana_reporting": false, 00:15:00.053 "cntlid": 1, 00:15:00.053 "firmware_revision": "24.01.1", 00:15:00.053 "model_number": "SPDK bdev Controller", 00:15:00.053 "multi_ctrlr": true, 00:15:00.053 "oacs": { 00:15:00.053 "firmware": 0, 00:15:00.053 "format": 0, 00:15:00.053 "ns_manage": 0, 00:15:00.053 "security": 0 00:15:00.053 }, 00:15:00.053 "serial_number": "SPDK0", 00:15:00.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.053 "vendor_id": "0x8086" 00:15:00.053 }, 00:15:00.053 "ns_data": { 00:15:00.053 "can_share": true, 00:15:00.053 "id": 1 00:15:00.053 }, 00:15:00.053 "trid": { 00:15:00.053 "adrfam": "IPv4", 00:15:00.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.053 "traddr": "10.0.0.2", 00:15:00.053 "trsvcid": "4420", 00:15:00.053 "trtype": "TCP" 00:15:00.053 }, 00:15:00.053 "vs": { 00:15:00.053 "nvme_version": "1.3" 00:15:00.053 } 00:15:00.053 } 00:15:00.053 ] 00:15:00.053 }, 00:15:00.053 "name": "Nvme0n1", 00:15:00.053 "num_blocks": 38912, 00:15:00.053 "product_name": "NVMe disk", 00:15:00.053 "supported_io_types": { 00:15:00.053 "abort": true, 00:15:00.053 "compare": true, 00:15:00.053 "compare_and_write": true, 00:15:00.053 "flush": true, 00:15:00.053 "nvme_admin": true, 00:15:00.053 "nvme_io": true, 00:15:00.053 "read": true, 
00:15:00.053 "reset": true, 00:15:00.053 "unmap": true, 00:15:00.053 "write": true, 00:15:00.053 "write_zeroes": true 00:15:00.053 }, 00:15:00.053 "uuid": "6bba77c0-2935-422f-a7b8-476a38555388", 00:15:00.053 "zoned": false 00:15:00.053 } 00:15:00.053 ] 00:15:00.053 02:31:40 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.053 02:31:40 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73224 00:15:00.053 02:31:40 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:00.053 Running I/O for 10 seconds... 00:15:00.991 Latency(us) 00:15:00.991 [2024-11-21T02:31:41.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.991 [2024-11-21T02:31:41.638Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.991 Nvme0n1 : 1.00 9902.00 38.68 0.00 0.00 0.00 0.00 0.00 00:15:00.991 [2024-11-21T02:31:41.638Z] =================================================================================================================== 00:15:00.991 [2024-11-21T02:31:41.638Z] Total : 9902.00 38.68 0.00 0.00 0.00 0.00 0.00 00:15:00.991 00:15:01.927 02:31:42 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:01.927 [2024-11-21T02:31:42.574Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.927 Nvme0n1 : 2.00 10083.00 39.39 0.00 0.00 0.00 0.00 0.00 00:15:01.927 [2024-11-21T02:31:42.574Z] =================================================================================================================== 00:15:01.927 [2024-11-21T02:31:42.574Z] Total : 10083.00 39.39 0.00 0.00 0.00 0.00 0.00 00:15:01.927 00:15:02.186 true 00:15:02.186 02:31:42 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:02.186 02:31:42 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:02.754 02:31:43 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:02.754 02:31:43 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:02.754 02:31:43 -- target/nvmf_lvs_grow.sh@65 -- # wait 73224 00:15:03.012 [2024-11-21T02:31:43.659Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.012 Nvme0n1 : 3.00 10010.00 39.10 0.00 0.00 0.00 0.00 0.00 00:15:03.012 [2024-11-21T02:31:43.659Z] =================================================================================================================== 00:15:03.012 [2024-11-21T02:31:43.660Z] Total : 10010.00 39.10 0.00 0.00 0.00 0.00 0.00 00:15:03.013 00:15:03.949 [2024-11-21T02:31:44.596Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.949 Nvme0n1 : 4.00 9837.25 38.43 0.00 0.00 0.00 0.00 0.00 00:15:03.949 [2024-11-21T02:31:44.596Z] =================================================================================================================== 00:15:03.949 [2024-11-21T02:31:44.596Z] Total : 9837.25 38.43 0.00 0.00 0.00 0.00 0.00 00:15:03.949 00:15:05.326 [2024-11-21T02:31:45.973Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.326 Nvme0n1 : 5.00 9848.20 38.47 0.00 0.00 0.00 0.00 0.00 00:15:05.326 [2024-11-21T02:31:45.973Z] =================================================================================================================== 00:15:05.326 [2024-11-21T02:31:45.973Z] Total : 9848.20 
38.47 0.00 0.00 0.00 0.00 0.00 00:15:05.326 00:15:06.263 [2024-11-21T02:31:46.910Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.263 Nvme0n1 : 6.00 9873.33 38.57 0.00 0.00 0.00 0.00 0.00 00:15:06.263 [2024-11-21T02:31:46.910Z] =================================================================================================================== 00:15:06.263 [2024-11-21T02:31:46.910Z] Total : 9873.33 38.57 0.00 0.00 0.00 0.00 0.00 00:15:06.263 00:15:07.199 [2024-11-21T02:31:47.846Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.200 Nvme0n1 : 7.00 9801.14 38.29 0.00 0.00 0.00 0.00 0.00 00:15:07.200 [2024-11-21T02:31:47.847Z] =================================================================================================================== 00:15:07.200 [2024-11-21T02:31:47.847Z] Total : 9801.14 38.29 0.00 0.00 0.00 0.00 0.00 00:15:07.200 00:15:08.136 [2024-11-21T02:31:48.783Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.136 Nvme0n1 : 8.00 9440.00 36.88 0.00 0.00 0.00 0.00 0.00 00:15:08.136 [2024-11-21T02:31:48.783Z] =================================================================================================================== 00:15:08.136 [2024-11-21T02:31:48.783Z] Total : 9440.00 36.88 0.00 0.00 0.00 0.00 0.00 00:15:08.136 00:15:09.075 [2024-11-21T02:31:49.722Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.075 Nvme0n1 : 9.00 9159.00 35.78 0.00 0.00 0.00 0.00 0.00 00:15:09.075 [2024-11-21T02:31:49.722Z] =================================================================================================================== 00:15:09.075 [2024-11-21T02:31:49.722Z] Total : 9159.00 35.78 0.00 0.00 0.00 0.00 0.00 00:15:09.075 00:15:10.009 [2024-11-21T02:31:50.656Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.009 Nvme0n1 : 10.00 8941.40 34.93 0.00 0.00 0.00 0.00 0.00 00:15:10.009 [2024-11-21T02:31:50.656Z] =================================================================================================================== 00:15:10.009 [2024-11-21T02:31:50.656Z] Total : 8941.40 34.93 0.00 0.00 0.00 0.00 0.00 00:15:10.009 00:15:10.009 00:15:10.009 Latency(us) 00:15:10.009 [2024-11-21T02:31:50.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.009 [2024-11-21T02:31:50.656Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.009 Nvme0n1 : 10.01 8945.03 34.94 0.00 0.00 14300.34 4408.79 116296.61 00:15:10.009 [2024-11-21T02:31:50.656Z] =================================================================================================================== 00:15:10.009 [2024-11-21T02:31:50.656Z] Total : 8945.03 34.94 0.00 0.00 14300.34 4408.79 116296.61 00:15:10.009 0 00:15:10.009 02:31:50 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73178 00:15:10.009 02:31:50 -- common/autotest_common.sh@936 -- # '[' -z 73178 ']' 00:15:10.009 02:31:50 -- common/autotest_common.sh@940 -- # kill -0 73178 00:15:10.009 02:31:50 -- common/autotest_common.sh@941 -- # uname 00:15:10.009 02:31:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.009 02:31:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73178 00:15:10.009 02:31:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:10.009 02:31:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:10.009 killing process with pid 73178 00:15:10.009 
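The grow step issued in the middle of that bdevperf run is the point of the test: once the backing file has been enlarged and rescanned, bdev_lvol_grow_lvstore makes the lvstore claim the new capacity and total_data_clusters moves from 49 to 99. A minimal verification sketch with the same calls follows; the cluster arithmetic in the comments is an interpretation of the numbers in the log, not log output:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs=be3c3602-ddeb-412f-8985-9217aaf5fd5d                          # lvstore UUID from this run

    $RPC bdev_lvol_grow_lvstore -u "$lvs"
    total=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    free=$($RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')

    # 400 MiB backing file / 4 MiB clusters ~= 100, less metadata -> 99 data clusters;
    # the 150 MiB lvol holds ceil(150/4) = 38 of them, so 99 - 38 = 61 should stay free,
    # matching the free_clusters check later in the log.
    [[ "$total" -eq 99 && "$free" -eq 61 ]] && echo "grow verified"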
02:31:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73178' 00:15:10.009 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.009 00:15:10.009 Latency(us) 00:15:10.009 [2024-11-21T02:31:50.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.009 [2024-11-21T02:31:50.656Z] =================================================================================================================== 00:15:10.009 [2024-11-21T02:31:50.656Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.009 02:31:50 -- common/autotest_common.sh@955 -- # kill 73178 00:15:10.009 02:31:50 -- common/autotest_common.sh@960 -- # wait 73178 00:15:10.575 02:31:50 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:10.575 02:31:51 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:10.575 02:31:51 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:10.834 02:31:51 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:10.834 02:31:51 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:10.834 02:31:51 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.093 [2024-11-21 02:31:51.682404] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:11.094 02:31:51 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:11.094 02:31:51 -- common/autotest_common.sh@650 -- # local es=0 00:15:11.094 02:31:51 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:11.094 02:31:51 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.094 02:31:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.094 02:31:51 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.094 02:31:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.094 02:31:51 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.094 02:31:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.094 02:31:51 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.094 02:31:51 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:11.094 02:31:51 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:11.353 2024/11/21 02:31:51 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:be3c3602-ddeb-412f-8985-9217aaf5fd5d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:11.353 request: 00:15:11.353 { 00:15:11.353 "method": "bdev_lvol_get_lvstores", 00:15:11.353 "params": { 00:15:11.353 "uuid": "be3c3602-ddeb-412f-8985-9217aaf5fd5d" 00:15:11.353 } 00:15:11.353 } 00:15:11.353 Got JSON-RPC error response 00:15:11.353 GoRPCClient: error on JSON-RPC call 00:15:11.353 02:31:51 -- common/autotest_common.sh@653 -- # es=1 00:15:11.353 02:31:51 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.353 02:31:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.353 02:31:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.353 02:31:51 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:11.611 aio_bdev 00:15:11.611 02:31:52 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6bba77c0-2935-422f-a7b8-476a38555388 00:15:11.611 02:31:52 -- common/autotest_common.sh@897 -- # local bdev_name=6bba77c0-2935-422f-a7b8-476a38555388 00:15:11.611 02:31:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.611 02:31:52 -- common/autotest_common.sh@899 -- # local i 00:15:11.611 02:31:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.611 02:31:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.611 02:31:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.869 02:31:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6bba77c0-2935-422f-a7b8-476a38555388 -t 2000 00:15:12.129 [ 00:15:12.129 { 00:15:12.129 "aliases": [ 00:15:12.129 "lvs/lvol" 00:15:12.129 ], 00:15:12.129 "assigned_rate_limits": { 00:15:12.129 "r_mbytes_per_sec": 0, 00:15:12.129 "rw_ios_per_sec": 0, 00:15:12.129 "rw_mbytes_per_sec": 0, 00:15:12.129 "w_mbytes_per_sec": 0 00:15:12.129 }, 00:15:12.129 "block_size": 4096, 00:15:12.129 "claimed": false, 00:15:12.129 "driver_specific": { 00:15:12.129 "lvol": { 00:15:12.129 "base_bdev": "aio_bdev", 00:15:12.129 "clone": false, 00:15:12.129 "esnap_clone": false, 00:15:12.129 "lvol_store_uuid": "be3c3602-ddeb-412f-8985-9217aaf5fd5d", 00:15:12.129 "snapshot": false, 00:15:12.129 "thin_provision": false 00:15:12.129 } 00:15:12.129 }, 00:15:12.129 "name": "6bba77c0-2935-422f-a7b8-476a38555388", 00:15:12.129 "num_blocks": 38912, 00:15:12.129 "product_name": "Logical Volume", 00:15:12.129 "supported_io_types": { 00:15:12.129 "abort": false, 00:15:12.129 "compare": false, 00:15:12.129 "compare_and_write": false, 00:15:12.129 "flush": false, 00:15:12.129 "nvme_admin": false, 00:15:12.129 "nvme_io": false, 00:15:12.129 "read": true, 00:15:12.129 "reset": true, 00:15:12.129 "unmap": true, 00:15:12.129 "write": true, 00:15:12.129 "write_zeroes": true 00:15:12.129 }, 00:15:12.129 "uuid": "6bba77c0-2935-422f-a7b8-476a38555388", 00:15:12.129 "zoned": false 00:15:12.129 } 00:15:12.129 ] 00:15:12.129 02:31:52 -- common/autotest_common.sh@905 -- # return 0 00:15:12.129 02:31:52 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:12.129 02:31:52 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:12.388 02:31:52 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:12.388 02:31:52 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:12.388 02:31:52 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:12.646 02:31:53 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:12.646 02:31:53 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6bba77c0-2935-422f-a7b8-476a38555388 00:15:12.905 02:31:53 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u be3c3602-ddeb-412f-8985-9217aaf5fd5d 00:15:13.179 02:31:53 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:13.528 02:31:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:13.787 ************************************ 00:15:13.787 END TEST lvs_grow_clean 00:15:13.787 ************************************ 00:15:13.787 00:15:13.787 real 0m17.783s 00:15:13.787 user 0m17.131s 00:15:13.787 sys 0m2.142s 00:15:13.787 02:31:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.787 02:31:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:13.787 02:31:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.787 02:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.787 02:31:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.787 ************************************ 00:15:13.787 START TEST lvs_grow_dirty 00:15:13.787 ************************************ 00:15:13.787 02:31:54 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:13.787 02:31:54 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:14.046 02:31:54 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:14.046 02:31:54 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:14.305 02:31:54 -- target/nvmf_lvs_grow.sh@28 -- # lvs=27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:14.305 02:31:54 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:14.305 02:31:54 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:14.564 02:31:55 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:14.564 02:31:55 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:14.564 02:31:55 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd lvol 150 00:15:14.823 02:31:55 -- target/nvmf_lvs_grow.sh@33 -- # lvol=817441e3-c874-4c95-8799-6b66d6d7092b 00:15:14.823 02:31:55 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:14.823 02:31:55 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:15.082 [2024-11-21 02:31:55.617668] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:15.082 [2024-11-21 02:31:55.617760] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:15.082 true 00:15:15.082 02:31:55 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:15.082 02:31:55 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:15.341 02:31:55 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:15.341 02:31:55 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:15.599 02:31:56 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 817441e3-c874-4c95-8799-6b66d6d7092b 00:15:15.858 02:31:56 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:16.117 02:31:56 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:16.376 02:31:56 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73612 00:15:16.376 02:31:56 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:16.376 02:31:56 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:16.376 02:31:56 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73612 /var/tmp/bdevperf.sock 00:15:16.376 02:31:56 -- common/autotest_common.sh@829 -- # '[' -z 73612 ']' 00:15:16.376 02:31:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.376 02:31:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.376 02:31:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.376 02:31:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.376 02:31:56 -- common/autotest_common.sh@10 -- # set +x 00:15:16.376 [2024-11-21 02:31:56.812293] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:16.376 [2024-11-21 02:31:56.812365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73612 ] 00:15:16.376 [2024-11-21 02:31:56.946818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.635 [2024-11-21 02:31:57.045816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.200 02:31:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.200 02:31:57 -- common/autotest_common.sh@862 -- # return 0 00:15:17.200 02:31:57 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:17.458 Nvme0n1 00:15:17.458 02:31:58 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:17.717 [ 00:15:17.717 { 00:15:17.717 "aliases": [ 00:15:17.717 "817441e3-c874-4c95-8799-6b66d6d7092b" 00:15:17.717 ], 00:15:17.717 "assigned_rate_limits": { 00:15:17.717 "r_mbytes_per_sec": 0, 00:15:17.717 "rw_ios_per_sec": 0, 00:15:17.717 "rw_mbytes_per_sec": 0, 00:15:17.717 "w_mbytes_per_sec": 0 00:15:17.717 }, 00:15:17.717 "block_size": 4096, 00:15:17.717 "claimed": false, 00:15:17.717 "driver_specific": { 00:15:17.717 "mp_policy": "active_passive", 00:15:17.717 "nvme": [ 00:15:17.717 { 00:15:17.717 "ctrlr_data": { 00:15:17.717 "ana_reporting": false, 00:15:17.717 "cntlid": 1, 00:15:17.717 "firmware_revision": "24.01.1", 00:15:17.717 "model_number": "SPDK bdev Controller", 00:15:17.717 "multi_ctrlr": true, 00:15:17.717 "oacs": { 00:15:17.717 "firmware": 0, 00:15:17.717 "format": 0, 00:15:17.717 "ns_manage": 0, 00:15:17.717 "security": 0 00:15:17.717 }, 00:15:17.717 "serial_number": "SPDK0", 00:15:17.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.717 "vendor_id": "0x8086" 00:15:17.717 }, 00:15:17.717 "ns_data": { 00:15:17.717 "can_share": true, 00:15:17.717 "id": 1 00:15:17.717 }, 00:15:17.717 "trid": { 00:15:17.717 "adrfam": "IPv4", 00:15:17.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:17.717 "traddr": "10.0.0.2", 00:15:17.717 "trsvcid": "4420", 00:15:17.717 "trtype": "TCP" 00:15:17.717 }, 00:15:17.717 "vs": { 00:15:17.717 "nvme_version": "1.3" 00:15:17.717 } 00:15:17.717 } 00:15:17.717 ] 00:15:17.717 }, 00:15:17.717 "name": "Nvme0n1", 00:15:17.717 "num_blocks": 38912, 00:15:17.717 "product_name": "NVMe disk", 00:15:17.717 "supported_io_types": { 00:15:17.717 "abort": true, 00:15:17.717 "compare": true, 00:15:17.717 "compare_and_write": true, 00:15:17.717 "flush": true, 00:15:17.717 "nvme_admin": true, 00:15:17.717 "nvme_io": true, 00:15:17.717 "read": true, 00:15:17.717 "reset": true, 00:15:17.717 "unmap": true, 00:15:17.717 "write": true, 00:15:17.717 "write_zeroes": true 00:15:17.717 }, 00:15:17.717 "uuid": "817441e3-c874-4c95-8799-6b66d6d7092b", 00:15:17.717 "zoned": false 00:15:17.717 } 00:15:17.717 ] 00:15:17.717 02:31:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73660 00:15:17.717 02:31:58 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:17.717 02:31:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:17.976 Running I/O for 10 seconds... 
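Both runs drive the I/O generator the same way: bdevperf is started with -z so it idles on its own RPC socket, the exported lvol is attached as an NVMe-oF controller through that socket, and perform_tests starts the 10-second randwrite workload reported in the table below. A sketch of that pattern with the flags taken from the command lines above (the test additionally waits for the socket to appear before issuing RPCs):

    SPDK=/home/vagrant/spdk_repo/spdk                                  # shorthand for the repo path
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z): 4 KiB I/O, queue depth 128, random writes, 10 seconds, core mask 0x2.
    "$SPDK/build/examples/bdevperf" -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # Attach the exported lvol as bdev Nvme0n1 over NVMe/TCP, then kick off the workload.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests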
00:15:18.912 Latency(us) 00:15:18.912 [2024-11-21T02:31:59.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.912 [2024-11-21T02:31:59.559Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.912 Nvme0n1 : 1.00 10323.00 40.32 0.00 0.00 0.00 0.00 0.00 00:15:18.912 [2024-11-21T02:31:59.559Z] =================================================================================================================== 00:15:18.912 [2024-11-21T02:31:59.559Z] Total : 10323.00 40.32 0.00 0.00 0.00 0.00 0.00 00:15:18.912 00:15:19.847 02:32:00 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:19.847 [2024-11-21T02:32:00.494Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.847 Nvme0n1 : 2.00 10157.00 39.68 0.00 0.00 0.00 0.00 0.00 00:15:19.847 [2024-11-21T02:32:00.494Z] =================================================================================================================== 00:15:19.847 [2024-11-21T02:32:00.494Z] Total : 10157.00 39.68 0.00 0.00 0.00 0.00 0.00 00:15:19.847 00:15:20.106 true 00:15:20.106 02:32:00 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:20.106 02:32:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:20.364 02:32:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:20.364 02:32:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:20.364 02:32:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 73660 00:15:20.931 [2024-11-21T02:32:01.578Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.931 Nvme0n1 : 3.00 9696.00 37.88 0.00 0.00 0.00 0.00 0.00 00:15:20.931 [2024-11-21T02:32:01.578Z] =================================================================================================================== 00:15:20.931 [2024-11-21T02:32:01.578Z] Total : 9696.00 37.88 0.00 0.00 0.00 0.00 0.00 00:15:20.931 00:15:21.866 [2024-11-21T02:32:02.513Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.866 Nvme0n1 : 4.00 8957.50 34.99 0.00 0.00 0.00 0.00 0.00 00:15:21.866 [2024-11-21T02:32:02.513Z] =================================================================================================================== 00:15:21.866 [2024-11-21T02:32:02.513Z] Total : 8957.50 34.99 0.00 0.00 0.00 0.00 0.00 00:15:21.866 00:15:22.803 [2024-11-21T02:32:03.450Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.803 Nvme0n1 : 5.00 8542.60 33.37 0.00 0.00 0.00 0.00 0.00 00:15:22.803 [2024-11-21T02:32:03.450Z] =================================================================================================================== 00:15:22.803 [2024-11-21T02:32:03.450Z] Total : 8542.60 33.37 0.00 0.00 0.00 0.00 0.00 00:15:22.803 00:15:24.179 [2024-11-21T02:32:04.826Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.179 Nvme0n1 : 6.00 8249.83 32.23 0.00 0.00 0.00 0.00 0.00 00:15:24.179 [2024-11-21T02:32:04.826Z] =================================================================================================================== 00:15:24.179 [2024-11-21T02:32:04.826Z] Total : 8249.83 32.23 0.00 0.00 0.00 0.00 0.00 00:15:24.179 00:15:25.114 [2024-11-21T02:32:05.761Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:15:25.114 Nvme0n1 : 7.00 7772.00 30.36 0.00 0.00 0.00 0.00 0.00 00:15:25.114 [2024-11-21T02:32:05.761Z] =================================================================================================================== 00:15:25.114 [2024-11-21T02:32:05.761Z] Total : 7772.00 30.36 0.00 0.00 0.00 0.00 0.00 00:15:25.114 00:15:26.049 [2024-11-21T02:32:06.696Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.049 Nvme0n1 : 8.00 7909.38 30.90 0.00 0.00 0.00 0.00 0.00 00:15:26.049 [2024-11-21T02:32:06.696Z] =================================================================================================================== 00:15:26.049 [2024-11-21T02:32:06.696Z] Total : 7909.38 30.90 0.00 0.00 0.00 0.00 0.00 00:15:26.049 00:15:26.985 [2024-11-21T02:32:07.632Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.985 Nvme0n1 : 9.00 8135.11 31.78 0.00 0.00 0.00 0.00 0.00 00:15:26.985 [2024-11-21T02:32:07.632Z] =================================================================================================================== 00:15:26.985 [2024-11-21T02:32:07.632Z] Total : 8135.11 31.78 0.00 0.00 0.00 0.00 0.00 00:15:26.985 00:15:27.921 [2024-11-21T02:32:08.568Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.921 Nvme0n1 : 10.00 8228.90 32.14 0.00 0.00 0.00 0.00 0.00 00:15:27.921 [2024-11-21T02:32:08.568Z] =================================================================================================================== 00:15:27.921 [2024-11-21T02:32:08.568Z] Total : 8228.90 32.14 0.00 0.00 0.00 0.00 0.00 00:15:27.921 00:15:27.921 00:15:27.921 Latency(us) 00:15:27.921 [2024-11-21T02:32:08.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.921 [2024-11-21T02:32:08.568Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.921 Nvme0n1 : 10.01 8233.22 32.16 0.00 0.00 15537.91 5272.67 305040.29 00:15:27.921 [2024-11-21T02:32:08.568Z] =================================================================================================================== 00:15:27.921 [2024-11-21T02:32:08.568Z] Total : 8233.22 32.16 0.00 0.00 15537.91 5272.67 305040.29 00:15:27.921 0 00:15:27.921 02:32:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73612 00:15:27.921 02:32:08 -- common/autotest_common.sh@936 -- # '[' -z 73612 ']' 00:15:27.921 02:32:08 -- common/autotest_common.sh@940 -- # kill -0 73612 00:15:27.921 02:32:08 -- common/autotest_common.sh@941 -- # uname 00:15:27.921 02:32:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.921 02:32:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73612 00:15:27.921 killing process with pid 73612 00:15:27.921 Received shutdown signal, test time was about 10.000000 seconds 00:15:27.921 00:15:27.921 Latency(us) 00:15:27.921 [2024-11-21T02:32:08.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.921 [2024-11-21T02:32:08.568Z] =================================================================================================================== 00:15:27.921 [2024-11-21T02:32:08.568Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.921 02:32:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:27.921 02:32:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:27.921 02:32:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73612' 00:15:27.921 02:32:08 -- 
common/autotest_common.sh@955 -- # kill 73612 00:15:27.921 02:32:08 -- common/autotest_common.sh@960 -- # wait 73612 00:15:28.180 02:32:08 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:28.438 02:32:09 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:28.438 02:32:09 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:28.710 02:32:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:28.710 02:32:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:28.710 02:32:09 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73016 00:15:28.710 02:32:09 -- target/nvmf_lvs_grow.sh@74 -- # wait 73016 00:15:28.710 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73016 Killed "${NVMF_APP[@]}" "$@" 00:15:28.710 02:32:09 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:28.710 02:32:09 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:28.710 02:32:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:28.710 02:32:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.710 02:32:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.710 02:32:09 -- nvmf/common.sh@469 -- # nvmfpid=73810 00:15:28.710 02:32:09 -- nvmf/common.sh@470 -- # waitforlisten 73810 00:15:28.710 02:32:09 -- common/autotest_common.sh@829 -- # '[' -z 73810 ']' 00:15:28.710 02:32:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.710 02:32:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:28.710 02:32:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.710 02:32:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.710 02:32:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.710 02:32:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.710 [2024-11-21 02:32:09.307995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:28.710 [2024-11-21 02:32:09.308100] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.970 [2024-11-21 02:32:09.444676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.970 [2024-11-21 02:32:09.528037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.970 [2024-11-21 02:32:09.528179] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.970 [2024-11-21 02:32:09.528190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.970 [2024-11-21 02:32:09.528199] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
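This is where lvs_grow_dirty diverges from the clean case: rather than deleting the lvstore, the test kills the nvmf target with SIGKILL while the grown lvstore is still live (the "Killed" line above), starts a fresh target, and re-creates the AIO bdev, at which point the blobstore recovery notices below show the lvstore metadata being replayed before the cluster counts are re-checked. Roughly, and using placeholder shell variables for the pid and paths, the dirty path is:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs=27bdd40b-b643-4c11-a9fd-028f9b654fcd                           # lvstore UUID of the dirty run
    nvmfpid=73016                                                      # pid of the first nvmf_tgt in this log

    kill -9 "$nvmfpid"                                                 # target dies with the lvstore still open
    # (a new nvmf_tgt is then started inside nvmf_tgt_ns_spdk and its RPC socket awaited)
    $RPC bdev_aio_create "$AIO" aio_bdev 4096                          # re-attach the backing file; the blobstore
                                                                       # recovery notices below follow this attach
    $RPC bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].free_clusters, .[0].total_data_clusters'         # expected 61 and 99, as verified in the clean case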
00:15:28.970 [2024-11-21 02:32:09.528224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.904 02:32:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.904 02:32:10 -- common/autotest_common.sh@862 -- # return 0 00:15:29.904 02:32:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:29.904 02:32:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.904 02:32:10 -- common/autotest_common.sh@10 -- # set +x 00:15:29.904 02:32:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.904 02:32:10 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:29.904 [2024-11-21 02:32:10.526409] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:29.904 [2024-11-21 02:32:10.526794] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:29.904 [2024-11-21 02:32:10.526969] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:30.162 02:32:10 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:30.162 02:32:10 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 817441e3-c874-4c95-8799-6b66d6d7092b 00:15:30.162 02:32:10 -- common/autotest_common.sh@897 -- # local bdev_name=817441e3-c874-4c95-8799-6b66d6d7092b 00:15:30.162 02:32:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.162 02:32:10 -- common/autotest_common.sh@899 -- # local i 00:15:30.162 02:32:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.162 02:32:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.162 02:32:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:30.421 02:32:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 817441e3-c874-4c95-8799-6b66d6d7092b -t 2000 00:15:30.421 [ 00:15:30.421 { 00:15:30.421 "aliases": [ 00:15:30.421 "lvs/lvol" 00:15:30.421 ], 00:15:30.421 "assigned_rate_limits": { 00:15:30.421 "r_mbytes_per_sec": 0, 00:15:30.421 "rw_ios_per_sec": 0, 00:15:30.421 "rw_mbytes_per_sec": 0, 00:15:30.421 "w_mbytes_per_sec": 0 00:15:30.421 }, 00:15:30.421 "block_size": 4096, 00:15:30.421 "claimed": false, 00:15:30.421 "driver_specific": { 00:15:30.421 "lvol": { 00:15:30.421 "base_bdev": "aio_bdev", 00:15:30.421 "clone": false, 00:15:30.421 "esnap_clone": false, 00:15:30.421 "lvol_store_uuid": "27bdd40b-b643-4c11-a9fd-028f9b654fcd", 00:15:30.421 "snapshot": false, 00:15:30.421 "thin_provision": false 00:15:30.421 } 00:15:30.421 }, 00:15:30.421 "name": "817441e3-c874-4c95-8799-6b66d6d7092b", 00:15:30.421 "num_blocks": 38912, 00:15:30.421 "product_name": "Logical Volume", 00:15:30.421 "supported_io_types": { 00:15:30.421 "abort": false, 00:15:30.421 "compare": false, 00:15:30.421 "compare_and_write": false, 00:15:30.421 "flush": false, 00:15:30.421 "nvme_admin": false, 00:15:30.421 "nvme_io": false, 00:15:30.421 "read": true, 00:15:30.422 "reset": true, 00:15:30.422 "unmap": true, 00:15:30.422 "write": true, 00:15:30.422 "write_zeroes": true 00:15:30.422 }, 00:15:30.422 "uuid": "817441e3-c874-4c95-8799-6b66d6d7092b", 00:15:30.422 "zoned": false 00:15:30.422 } 00:15:30.422 ] 00:15:30.680 02:32:11 -- common/autotest_common.sh@905 -- # return 0 00:15:30.680 02:32:11 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:30.680 02:32:11 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:30.939 02:32:11 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:30.939 02:32:11 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:30.939 02:32:11 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:30.939 02:32:11 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:30.939 02:32:11 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:31.198 [2024-11-21 02:32:11.715952] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:31.198 02:32:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:31.198 02:32:11 -- common/autotest_common.sh@650 -- # local es=0 00:15:31.198 02:32:11 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:31.198 02:32:11 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.198 02:32:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.198 02:32:11 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.198 02:32:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.198 02:32:11 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.198 02:32:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.198 02:32:11 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.198 02:32:11 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:31.198 02:32:11 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:31.456 2024/11/21 02:32:11 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:27bdd40b-b643-4c11-a9fd-028f9b654fcd], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:31.456 request: 00:15:31.456 { 00:15:31.456 "method": "bdev_lvol_get_lvstores", 00:15:31.456 "params": { 00:15:31.456 "uuid": "27bdd40b-b643-4c11-a9fd-028f9b654fcd" 00:15:31.456 } 00:15:31.456 } 00:15:31.456 Got JSON-RPC error response 00:15:31.456 GoRPCClient: error on JSON-RPC call 00:15:31.456 02:32:11 -- common/autotest_common.sh@653 -- # es=1 00:15:31.456 02:32:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.456 02:32:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.456 02:32:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.456 02:32:11 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:31.715 aio_bdev 00:15:31.715 02:32:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 817441e3-c874-4c95-8799-6b66d6d7092b 00:15:31.715 02:32:12 -- common/autotest_common.sh@897 -- # local bdev_name=817441e3-c874-4c95-8799-6b66d6d7092b 00:15:31.715 02:32:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:31.715 
02:32:12 -- common/autotest_common.sh@899 -- # local i 00:15:31.715 02:32:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:31.715 02:32:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:31.715 02:32:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:31.973 02:32:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 817441e3-c874-4c95-8799-6b66d6d7092b -t 2000 00:15:32.232 [ 00:15:32.232 { 00:15:32.232 "aliases": [ 00:15:32.232 "lvs/lvol" 00:15:32.232 ], 00:15:32.232 "assigned_rate_limits": { 00:15:32.232 "r_mbytes_per_sec": 0, 00:15:32.232 "rw_ios_per_sec": 0, 00:15:32.232 "rw_mbytes_per_sec": 0, 00:15:32.232 "w_mbytes_per_sec": 0 00:15:32.232 }, 00:15:32.232 "block_size": 4096, 00:15:32.232 "claimed": false, 00:15:32.232 "driver_specific": { 00:15:32.232 "lvol": { 00:15:32.232 "base_bdev": "aio_bdev", 00:15:32.232 "clone": false, 00:15:32.232 "esnap_clone": false, 00:15:32.232 "lvol_store_uuid": "27bdd40b-b643-4c11-a9fd-028f9b654fcd", 00:15:32.232 "snapshot": false, 00:15:32.232 "thin_provision": false 00:15:32.232 } 00:15:32.232 }, 00:15:32.232 "name": "817441e3-c874-4c95-8799-6b66d6d7092b", 00:15:32.232 "num_blocks": 38912, 00:15:32.232 "product_name": "Logical Volume", 00:15:32.232 "supported_io_types": { 00:15:32.232 "abort": false, 00:15:32.232 "compare": false, 00:15:32.232 "compare_and_write": false, 00:15:32.232 "flush": false, 00:15:32.232 "nvme_admin": false, 00:15:32.232 "nvme_io": false, 00:15:32.232 "read": true, 00:15:32.232 "reset": true, 00:15:32.232 "unmap": true, 00:15:32.232 "write": true, 00:15:32.232 "write_zeroes": true 00:15:32.232 }, 00:15:32.232 "uuid": "817441e3-c874-4c95-8799-6b66d6d7092b", 00:15:32.232 "zoned": false 00:15:32.232 } 00:15:32.232 ] 00:15:32.232 02:32:12 -- common/autotest_common.sh@905 -- # return 0 00:15:32.232 02:32:12 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:32.232 02:32:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:32.490 02:32:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:32.490 02:32:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:32.490 02:32:12 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:32.490 02:32:13 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:32.490 02:32:13 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 817441e3-c874-4c95-8799-6b66d6d7092b 00:15:32.748 02:32:13 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27bdd40b-b643-4c11-a9fd-028f9b654fcd 00:15:33.007 02:32:13 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:33.265 02:32:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:33.832 00:15:33.832 real 0m19.901s 00:15:33.832 user 0m40.191s 00:15:33.832 sys 0m8.595s 00:15:33.832 02:32:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:33.832 02:32:14 -- common/autotest_common.sh@10 -- # set +x 00:15:33.832 ************************************ 00:15:33.832 END TEST lvs_grow_dirty 00:15:33.832 ************************************ 00:15:33.832 02:32:14 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:33.832 02:32:14 -- common/autotest_common.sh@806 -- # type=--id 00:15:33.832 02:32:14 -- common/autotest_common.sh@807 -- # id=0 00:15:33.832 02:32:14 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:33.832 02:32:14 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:33.832 02:32:14 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:33.832 02:32:14 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:33.832 02:32:14 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:33.832 02:32:14 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:33.832 nvmf_trace.0 00:15:33.832 02:32:14 -- common/autotest_common.sh@821 -- # return 0 00:15:33.832 02:32:14 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:33.832 02:32:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:33.832 02:32:14 -- nvmf/common.sh@116 -- # sync 00:15:34.398 02:32:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:34.398 02:32:14 -- nvmf/common.sh@119 -- # set +e 00:15:34.398 02:32:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:34.398 02:32:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:34.398 rmmod nvme_tcp 00:15:34.398 rmmod nvme_fabrics 00:15:34.398 rmmod nvme_keyring 00:15:34.398 02:32:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:34.398 02:32:14 -- nvmf/common.sh@123 -- # set -e 00:15:34.398 02:32:14 -- nvmf/common.sh@124 -- # return 0 00:15:34.398 02:32:14 -- nvmf/common.sh@477 -- # '[' -n 73810 ']' 00:15:34.398 02:32:14 -- nvmf/common.sh@478 -- # killprocess 73810 00:15:34.398 02:32:14 -- common/autotest_common.sh@936 -- # '[' -z 73810 ']' 00:15:34.398 02:32:14 -- common/autotest_common.sh@940 -- # kill -0 73810 00:15:34.398 02:32:14 -- common/autotest_common.sh@941 -- # uname 00:15:34.399 02:32:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:34.399 02:32:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73810 00:15:34.399 02:32:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:34.399 killing process with pid 73810 00:15:34.399 02:32:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:34.399 02:32:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73810' 00:15:34.399 02:32:14 -- common/autotest_common.sh@955 -- # kill 73810 00:15:34.399 02:32:14 -- common/autotest_common.sh@960 -- # wait 73810 00:15:34.657 02:32:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:34.657 02:32:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:34.657 02:32:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:34.657 02:32:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.657 02:32:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:34.657 02:32:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.657 02:32:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.657 02:32:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.657 02:32:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:34.657 00:15:34.657 real 0m40.757s 00:15:34.657 user 1m3.907s 00:15:34.657 sys 0m11.890s 00:15:34.657 02:32:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:34.657 ************************************ 00:15:34.657 END TEST nvmf_lvs_grow 00:15:34.657 
************************************ 00:15:34.657 02:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:34.657 02:32:15 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:34.657 02:32:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:34.657 02:32:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.657 02:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:34.657 ************************************ 00:15:34.657 START TEST nvmf_bdev_io_wait 00:15:34.657 ************************************ 00:15:34.657 02:32:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:34.916 * Looking for test storage... 00:15:34.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:34.916 02:32:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:34.917 02:32:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:34.917 02:32:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:34.917 02:32:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:34.917 02:32:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:34.917 02:32:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:34.917 02:32:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:34.917 02:32:15 -- scripts/common.sh@335 -- # IFS=.-: 00:15:34.917 02:32:15 -- scripts/common.sh@335 -- # read -ra ver1 00:15:34.917 02:32:15 -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.917 02:32:15 -- scripts/common.sh@336 -- # read -ra ver2 00:15:34.917 02:32:15 -- scripts/common.sh@337 -- # local 'op=<' 00:15:34.917 02:32:15 -- scripts/common.sh@339 -- # ver1_l=2 00:15:34.917 02:32:15 -- scripts/common.sh@340 -- # ver2_l=1 00:15:34.917 02:32:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:34.917 02:32:15 -- scripts/common.sh@343 -- # case "$op" in 00:15:34.917 02:32:15 -- scripts/common.sh@344 -- # : 1 00:15:34.917 02:32:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:34.917 02:32:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.917 02:32:15 -- scripts/common.sh@364 -- # decimal 1 00:15:34.917 02:32:15 -- scripts/common.sh@352 -- # local d=1 00:15:34.917 02:32:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.917 02:32:15 -- scripts/common.sh@354 -- # echo 1 00:15:34.917 02:32:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:34.917 02:32:15 -- scripts/common.sh@365 -- # decimal 2 00:15:34.917 02:32:15 -- scripts/common.sh@352 -- # local d=2 00:15:34.917 02:32:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.917 02:32:15 -- scripts/common.sh@354 -- # echo 2 00:15:34.917 02:32:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:34.917 02:32:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:34.917 02:32:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:34.917 02:32:15 -- scripts/common.sh@367 -- # return 0 00:15:34.917 02:32:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.917 02:32:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:34.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.917 --rc genhtml_branch_coverage=1 00:15:34.917 --rc genhtml_function_coverage=1 00:15:34.917 --rc genhtml_legend=1 00:15:34.917 --rc geninfo_all_blocks=1 00:15:34.917 --rc geninfo_unexecuted_blocks=1 00:15:34.917 00:15:34.917 ' 00:15:34.917 02:32:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:34.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.917 --rc genhtml_branch_coverage=1 00:15:34.917 --rc genhtml_function_coverage=1 00:15:34.917 --rc genhtml_legend=1 00:15:34.917 --rc geninfo_all_blocks=1 00:15:34.917 --rc geninfo_unexecuted_blocks=1 00:15:34.917 00:15:34.917 ' 00:15:34.917 02:32:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:34.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.917 --rc genhtml_branch_coverage=1 00:15:34.917 --rc genhtml_function_coverage=1 00:15:34.917 --rc genhtml_legend=1 00:15:34.917 --rc geninfo_all_blocks=1 00:15:34.917 --rc geninfo_unexecuted_blocks=1 00:15:34.917 00:15:34.917 ' 00:15:34.917 02:32:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:34.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.917 --rc genhtml_branch_coverage=1 00:15:34.917 --rc genhtml_function_coverage=1 00:15:34.917 --rc genhtml_legend=1 00:15:34.917 --rc geninfo_all_blocks=1 00:15:34.917 --rc geninfo_unexecuted_blocks=1 00:15:34.917 00:15:34.917 ' 00:15:34.917 02:32:15 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.917 02:32:15 -- nvmf/common.sh@7 -- # uname -s 00:15:34.917 02:32:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.917 02:32:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.917 02:32:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.917 02:32:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.917 02:32:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.917 02:32:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.917 02:32:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.917 02:32:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.917 02:32:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.917 02:32:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.917 02:32:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
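Editor's note: nvmf/common.sh is being sourced here and defines the kernel-initiator knobs (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID, NVME_CONNECT). This particular run drives I/O through bdevperf rather than the kernel initiator, but a hypothetical use of those variables would look roughly like the sketch below; the address, port and subsystem NQN are the ones used later in this log, everything else is illustrative:

# hypothetical kernel-initiator attach using the variables defined above
NVME_HOSTNQN=$(nvme gen-hostnqn)
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"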
00:15:34.917 02:32:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:15:34.917 02:32:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.917 02:32:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.917 02:32:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.917 02:32:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.917 02:32:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.917 02:32:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.917 02:32:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.917 02:32:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.917 02:32:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.917 02:32:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.917 02:32:15 -- paths/export.sh@5 -- # export PATH 00:15:34.917 02:32:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.917 02:32:15 -- nvmf/common.sh@46 -- # : 0 00:15:34.917 02:32:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:34.917 02:32:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:34.917 02:32:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:34.917 02:32:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.917 02:32:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.917 02:32:15 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:34.917 02:32:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:34.917 02:32:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:34.917 02:32:15 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.917 02:32:15 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.917 02:32:15 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:34.917 02:32:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:34.917 02:32:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.917 02:32:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:34.917 02:32:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:34.917 02:32:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:34.917 02:32:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.917 02:32:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.917 02:32:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.917 02:32:15 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:34.917 02:32:15 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:34.917 02:32:15 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:34.917 02:32:15 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:34.917 02:32:15 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:34.917 02:32:15 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:34.917 02:32:15 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.917 02:32:15 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.917 02:32:15 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:34.917 02:32:15 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:34.917 02:32:15 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.917 02:32:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.917 02:32:15 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.917 02:32:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.917 02:32:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.917 02:32:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.917 02:32:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.917 02:32:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.917 02:32:15 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:34.917 02:32:15 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:34.917 Cannot find device "nvmf_tgt_br" 00:15:34.917 02:32:15 -- nvmf/common.sh@154 -- # true 00:15:34.917 02:32:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.917 Cannot find device "nvmf_tgt_br2" 00:15:34.917 02:32:15 -- nvmf/common.sh@155 -- # true 00:15:34.917 02:32:15 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:34.917 02:32:15 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:34.917 Cannot find device "nvmf_tgt_br" 00:15:34.917 02:32:15 -- nvmf/common.sh@157 -- # true 00:15:34.917 02:32:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:34.917 Cannot find device "nvmf_tgt_br2" 00:15:34.917 02:32:15 -- nvmf/common.sh@158 -- # true 00:15:34.917 02:32:15 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:35.176 02:32:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:35.176 02:32:15 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.176 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.176 02:32:15 -- nvmf/common.sh@161 -- # true 00:15:35.176 02:32:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.176 02:32:15 -- nvmf/common.sh@162 -- # true 00:15:35.176 02:32:15 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.176 02:32:15 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.176 02:32:15 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.176 02:32:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.176 02:32:15 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.176 02:32:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.176 02:32:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.176 02:32:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:35.176 02:32:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:35.176 02:32:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:35.176 02:32:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:35.176 02:32:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:35.176 02:32:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:35.176 02:32:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.176 02:32:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.176 02:32:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.177 02:32:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:35.177 02:32:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:35.177 02:32:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.177 02:32:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.177 02:32:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.177 02:32:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.177 02:32:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.177 02:32:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:35.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:35.177 00:15:35.177 --- 10.0.0.2 ping statistics --- 00:15:35.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.177 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:35.177 02:32:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:35.177 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.177 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:35.177 00:15:35.177 --- 10.0.0.3 ping statistics --- 00:15:35.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.177 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:35.177 02:32:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:15:35.177 00:15:35.177 --- 10.0.0.1 ping statistics --- 00:15:35.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.177 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:35.177 02:32:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.177 02:32:15 -- nvmf/common.sh@421 -- # return 0 00:15:35.177 02:32:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:35.177 02:32:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.177 02:32:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:35.177 02:32:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:35.177 02:32:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.177 02:32:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:35.177 02:32:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:35.177 02:32:15 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:35.177 02:32:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:35.177 02:32:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.177 02:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:35.435 02:32:15 -- nvmf/common.sh@469 -- # nvmfpid=74241 00:15:35.435 02:32:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:35.435 02:32:15 -- nvmf/common.sh@470 -- # waitforlisten 74241 00:15:35.435 02:32:15 -- common/autotest_common.sh@829 -- # '[' -z 74241 ']' 00:15:35.435 02:32:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.435 02:32:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.435 02:32:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.435 02:32:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.435 02:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:35.435 [2024-11-21 02:32:15.879593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:35.435 [2024-11-21 02:32:15.879678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.435 [2024-11-21 02:32:16.012379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.694 [2024-11-21 02:32:16.102699] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:35.694 [2024-11-21 02:32:16.102869] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.694 [2024-11-21 02:32:16.102882] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.694 [2024-11-21 02:32:16.102890] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
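Editor's note: the pings above close out nvmf_veth_init — the initiator half of each veth pair stays in the root namespace, the target halves move into nvmf_tgt_ns_spdk, everything is bridged, and TCP/4420 is opened before connectivity is verified. A minimal sketch condensed from the ip/iptables commands a few lines back (names and addresses exactly as in this run; the intermediate "ip link set ... up" steps are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target namespace, as checked above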
00:15:35.694 [2024-11-21 02:32:16.103024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.694 [2024-11-21 02:32:16.103423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.694 [2024-11-21 02:32:16.103909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.694 [2024-11-21 02:32:16.103918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.260 02:32:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.260 02:32:16 -- common/autotest_common.sh@862 -- # return 0 00:15:36.261 02:32:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:36.261 02:32:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.261 02:32:16 -- common/autotest_common.sh@10 -- # set +x 00:15:36.519 02:32:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.519 02:32:16 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:36.519 02:32:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.519 02:32:16 -- common/autotest_common.sh@10 -- # set +x 00:15:36.519 02:32:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.519 02:32:16 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:36.519 02:32:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.519 02:32:16 -- common/autotest_common.sh@10 -- # set +x 00:15:36.519 02:32:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.519 02:32:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.519 02:32:17 -- common/autotest_common.sh@10 -- # set +x 00:15:36.519 [2024-11-21 02:32:17.019926] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.519 02:32:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:36.519 02:32:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.519 02:32:17 -- common/autotest_common.sh@10 -- # set +x 00:15:36.519 Malloc0 00:15:36.519 02:32:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:36.519 02:32:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.519 02:32:17 -- common/autotest_common.sh@10 -- # set +x 00:15:36.519 02:32:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.519 02:32:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.519 02:32:17 -- common/autotest_common.sh@10 -- # set +x 00:15:36.519 02:32:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.519 02:32:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.519 02:32:17 -- common/autotest_common.sh@10 -- # set +x 00:15:36.519 [2024-11-21 02:32:17.077818] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.519 02:32:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74295 00:15:36.519 02:32:17 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@30 -- # READ_PID=74297 00:15:36.519 02:32:17 -- nvmf/common.sh@520 -- # config=() 00:15:36.519 02:32:17 -- nvmf/common.sh@520 -- # local subsystem config 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:36.519 02:32:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:36.519 02:32:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:36.519 { 00:15:36.519 "params": { 00:15:36.519 "name": "Nvme$subsystem", 00:15:36.519 "trtype": "$TEST_TRANSPORT", 00:15:36.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.519 "adrfam": "ipv4", 00:15:36.519 "trsvcid": "$NVMF_PORT", 00:15:36.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.519 "hdgst": ${hdgst:-false}, 00:15:36.519 "ddgst": ${ddgst:-false} 00:15:36.519 }, 00:15:36.519 "method": "bdev_nvme_attach_controller" 00:15:36.519 } 00:15:36.519 EOF 00:15:36.519 )") 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:36.519 02:32:17 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74299 00:15:36.519 02:32:17 -- nvmf/common.sh@520 -- # config=() 00:15:36.520 02:32:17 -- nvmf/common.sh@520 -- # local subsystem config 00:15:36.520 02:32:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:36.520 02:32:17 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:36.520 02:32:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:36.520 { 00:15:36.520 "params": { 00:15:36.520 "name": "Nvme$subsystem", 00:15:36.520 "trtype": "$TEST_TRANSPORT", 00:15:36.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.520 "adrfam": "ipv4", 00:15:36.520 "trsvcid": "$NVMF_PORT", 00:15:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.520 "hdgst": ${hdgst:-false}, 00:15:36.520 "ddgst": ${ddgst:-false} 00:15:36.520 }, 00:15:36.520 "method": "bdev_nvme_attach_controller" 00:15:36.520 } 00:15:36.520 EOF 00:15:36.520 )") 00:15:36.520 02:32:17 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74302 00:15:36.520 02:32:17 -- nvmf/common.sh@542 -- # cat 00:15:36.520 02:32:17 -- target/bdev_io_wait.sh@35 -- # sync 00:15:36.520 02:32:17 -- nvmf/common.sh@542 -- # cat 00:15:36.520 02:32:17 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:36.520 02:32:17 -- nvmf/common.sh@520 -- # config=() 00:15:36.520 02:32:17 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:36.520 02:32:17 -- nvmf/common.sh@520 -- # local subsystem config 00:15:36.520 02:32:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:36.520 02:32:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:36.520 { 00:15:36.520 "params": { 00:15:36.520 "name": "Nvme$subsystem", 00:15:36.520 "trtype": "$TEST_TRANSPORT", 00:15:36.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.520 "adrfam": "ipv4", 00:15:36.520 "trsvcid": "$NVMF_PORT", 00:15:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:15:36.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.520 "hdgst": ${hdgst:-false}, 00:15:36.520 "ddgst": ${ddgst:-false} 00:15:36.520 }, 00:15:36.520 "method": "bdev_nvme_attach_controller" 00:15:36.520 } 00:15:36.520 EOF 00:15:36.520 )") 00:15:36.520 02:32:17 -- nvmf/common.sh@544 -- # jq . 00:15:36.520 02:32:17 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:36.520 02:32:17 -- nvmf/common.sh@544 -- # jq . 00:15:36.520 02:32:17 -- nvmf/common.sh@520 -- # config=() 00:15:36.520 02:32:17 -- nvmf/common.sh@520 -- # local subsystem config 00:15:36.520 02:32:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:36.520 02:32:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:36.520 { 00:15:36.520 "params": { 00:15:36.520 "name": "Nvme$subsystem", 00:15:36.520 "trtype": "$TEST_TRANSPORT", 00:15:36.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.520 "adrfam": "ipv4", 00:15:36.520 "trsvcid": "$NVMF_PORT", 00:15:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.520 "hdgst": ${hdgst:-false}, 00:15:36.520 "ddgst": ${ddgst:-false} 00:15:36.520 }, 00:15:36.520 "method": "bdev_nvme_attach_controller" 00:15:36.520 } 00:15:36.520 EOF 00:15:36.520 )") 00:15:36.520 02:32:17 -- nvmf/common.sh@545 -- # IFS=, 00:15:36.520 02:32:17 -- nvmf/common.sh@542 -- # cat 00:15:36.520 02:32:17 -- nvmf/common.sh@545 -- # IFS=, 00:15:36.520 02:32:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:36.520 "params": { 00:15:36.520 "name": "Nvme1", 00:15:36.520 "trtype": "tcp", 00:15:36.520 "traddr": "10.0.0.2", 00:15:36.520 "adrfam": "ipv4", 00:15:36.520 "trsvcid": "4420", 00:15:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.520 "hdgst": false, 00:15:36.520 "ddgst": false 00:15:36.520 }, 00:15:36.520 "method": "bdev_nvme_attach_controller" 00:15:36.520 }' 00:15:36.520 02:32:17 -- nvmf/common.sh@542 -- # cat 00:15:36.520 02:32:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:36.520 "params": { 00:15:36.520 "name": "Nvme1", 00:15:36.520 "trtype": "tcp", 00:15:36.520 "traddr": "10.0.0.2", 00:15:36.520 "adrfam": "ipv4", 00:15:36.520 "trsvcid": "4420", 00:15:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.520 "hdgst": false, 00:15:36.520 "ddgst": false 00:15:36.520 }, 00:15:36.520 "method": "bdev_nvme_attach_controller" 00:15:36.520 }' 00:15:36.520 02:32:17 -- nvmf/common.sh@544 -- # jq . 00:15:36.520 02:32:17 -- nvmf/common.sh@545 -- # IFS=, 00:15:36.520 02:32:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:36.520 "params": { 00:15:36.520 "name": "Nvme1", 00:15:36.520 "trtype": "tcp", 00:15:36.520 "traddr": "10.0.0.2", 00:15:36.520 "adrfam": "ipv4", 00:15:36.520 "trsvcid": "4420", 00:15:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.520 "hdgst": false, 00:15:36.520 "ddgst": false 00:15:36.520 }, 00:15:36.520 "method": "bdev_nvme_attach_controller" 00:15:36.520 }' 00:15:36.520 02:32:17 -- nvmf/common.sh@544 -- # jq . 
00:15:36.520 02:32:17 -- nvmf/common.sh@545 -- # IFS=, 00:15:36.520 02:32:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:36.520 "params": { 00:15:36.520 "name": "Nvme1", 00:15:36.520 "trtype": "tcp", 00:15:36.520 "traddr": "10.0.0.2", 00:15:36.520 "adrfam": "ipv4", 00:15:36.520 "trsvcid": "4420", 00:15:36.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.520 "hdgst": false, 00:15:36.520 "ddgst": false 00:15:36.520 }, 00:15:36.520 "method": "bdev_nvme_attach_controller" 00:15:36.520 }' 00:15:36.520 [2024-11-21 02:32:17.145524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:36.520 [2024-11-21 02:32:17.146475] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:36.520 [2024-11-21 02:32:17.152401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:36.520 [2024-11-21 02:32:17.152476] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:36.520 [2024-11-21 02:32:17.162982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:36.520 [2024-11-21 02:32:17.163056] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:36.778 02:32:17 -- target/bdev_io_wait.sh@37 -- # wait 74295 00:15:36.778 [2024-11-21 02:32:17.174839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:36.778 [2024-11-21 02:32:17.174912] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:36.778 [2024-11-21 02:32:17.366062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.036 [2024-11-21 02:32:17.437112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.036 [2024-11-21 02:32:17.490677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:37.036 [2024-11-21 02:32:17.512233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.036 [2024-11-21 02:32:17.541085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:37.036 [2024-11-21 02:32:17.587145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.036 [2024-11-21 02:32:17.620108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:37.036 Running I/O for 1 seconds... 00:15:37.036 Running I/O for 1 seconds... 00:15:37.293 [2024-11-21 02:32:17.690153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:37.293 Running I/O for 1 seconds... 00:15:37.293 Running I/O for 1 seconds... 
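Editor's note: at this point four bdevperf instances (core masks 0x10/0x20/0x40/0x80; workloads write, read, flush, unmap) are running against the same remote controller for one second each. Each one receives a generated JSON config on fd 63 whose key entry is the bdev_nvme_attach_controller call printed above. A sketch of the write job follows; the attach-controller params are copied from this log, while the surrounding "subsystems" wrapper and the /tmp/nvme1.json path are assumptions for illustration (the harness uses a process substitution instead of a file):

# assumed on-disk form of the generated config for the write job
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# the write job, with the same flags as launched above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256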
00:15:38.230 00:15:38.230 Latency(us) 00:15:38.230 [2024-11-21T02:32:18.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.230 [2024-11-21T02:32:18.877Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:38.230 Nvme1n1 : 1.00 237185.62 926.51 0.00 0.00 537.66 218.76 960.70 00:15:38.230 [2024-11-21T02:32:18.877Z] =================================================================================================================== 00:15:38.230 [2024-11-21T02:32:18.877Z] Total : 237185.62 926.51 0.00 0.00 537.66 218.76 960.70 00:15:38.230 00:15:38.230 Latency(us) 00:15:38.230 [2024-11-21T02:32:18.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.230 [2024-11-21T02:32:18.877Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:38.230 Nvme1n1 : 1.02 5202.83 20.32 0.00 0.00 24281.85 9234.62 42181.35 00:15:38.230 [2024-11-21T02:32:18.877Z] =================================================================================================================== 00:15:38.230 [2024-11-21T02:32:18.877Z] Total : 5202.83 20.32 0.00 0.00 24281.85 9234.62 42181.35 00:15:38.230 00:15:38.230 Latency(us) 00:15:38.230 [2024-11-21T02:32:18.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.230 [2024-11-21T02:32:18.877Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:38.230 Nvme1n1 : 1.01 4955.46 19.36 0.00 0.00 25731.20 7298.33 44802.79 00:15:38.230 [2024-11-21T02:32:18.877Z] =================================================================================================================== 00:15:38.230 [2024-11-21T02:32:18.877Z] Total : 4955.46 19.36 0.00 0.00 25731.20 7298.33 44802.79 00:15:38.230 00:15:38.230 Latency(us) 00:15:38.230 [2024-11-21T02:32:18.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.230 [2024-11-21T02:32:18.877Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:38.230 Nvme1n1 : 1.01 7145.53 27.91 0.00 0.00 17843.86 6940.86 30980.65 00:15:38.230 [2024-11-21T02:32:18.877Z] =================================================================================================================== 00:15:38.230 [2024-11-21T02:32:18.877Z] Total : 7145.53 27.91 0.00 0.00 17843.86 6940.86 30980.65 00:15:38.489 02:32:19 -- target/bdev_io_wait.sh@38 -- # wait 74297 00:15:38.750 02:32:19 -- target/bdev_io_wait.sh@39 -- # wait 74299 00:15:38.750 02:32:19 -- target/bdev_io_wait.sh@40 -- # wait 74302 00:15:38.750 02:32:19 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.750 02:32:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.750 02:32:19 -- common/autotest_common.sh@10 -- # set +x 00:15:38.750 02:32:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.750 02:32:19 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:38.750 02:32:19 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:38.750 02:32:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:38.750 02:32:19 -- nvmf/common.sh@116 -- # sync 00:15:38.750 02:32:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:38.750 02:32:19 -- nvmf/common.sh@119 -- # set +e 00:15:38.750 02:32:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:38.750 02:32:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:38.750 rmmod nvme_tcp 00:15:38.750 rmmod nvme_fabrics 00:15:38.750 rmmod nvme_keyring 00:15:38.750 02:32:19 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:38.750 02:32:19 -- nvmf/common.sh@123 -- # set -e 00:15:38.750 02:32:19 -- nvmf/common.sh@124 -- # return 0 00:15:38.750 02:32:19 -- nvmf/common.sh@477 -- # '[' -n 74241 ']' 00:15:38.750 02:32:19 -- nvmf/common.sh@478 -- # killprocess 74241 00:15:38.750 02:32:19 -- common/autotest_common.sh@936 -- # '[' -z 74241 ']' 00:15:38.750 02:32:19 -- common/autotest_common.sh@940 -- # kill -0 74241 00:15:38.750 02:32:19 -- common/autotest_common.sh@941 -- # uname 00:15:38.750 02:32:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.750 02:32:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74241 00:15:38.750 02:32:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:38.750 killing process with pid 74241 00:15:38.750 02:32:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:38.750 02:32:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74241' 00:15:38.750 02:32:19 -- common/autotest_common.sh@955 -- # kill 74241 00:15:38.750 02:32:19 -- common/autotest_common.sh@960 -- # wait 74241 00:15:39.009 02:32:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:39.009 02:32:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:39.009 02:32:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:39.009 02:32:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.009 02:32:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:39.009 02:32:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.009 02:32:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.009 02:32:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.269 02:32:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:39.269 00:15:39.269 real 0m4.414s 00:15:39.269 user 0m19.719s 00:15:39.269 sys 0m1.891s 00:15:39.269 02:32:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:39.269 ************************************ 00:15:39.269 02:32:19 -- common/autotest_common.sh@10 -- # set +x 00:15:39.269 END TEST nvmf_bdev_io_wait 00:15:39.269 ************************************ 00:15:39.269 02:32:19 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:39.269 02:32:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.269 02:32:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.269 02:32:19 -- common/autotest_common.sh@10 -- # set +x 00:15:39.269 ************************************ 00:15:39.269 START TEST nvmf_queue_depth 00:15:39.269 ************************************ 00:15:39.269 02:32:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:39.269 * Looking for test storage... 
00:15:39.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:39.269 02:32:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:39.269 02:32:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:39.269 02:32:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:39.269 02:32:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:39.269 02:32:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:39.269 02:32:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:39.269 02:32:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:39.269 02:32:19 -- scripts/common.sh@335 -- # IFS=.-: 00:15:39.269 02:32:19 -- scripts/common.sh@335 -- # read -ra ver1 00:15:39.269 02:32:19 -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.269 02:32:19 -- scripts/common.sh@336 -- # read -ra ver2 00:15:39.269 02:32:19 -- scripts/common.sh@337 -- # local 'op=<' 00:15:39.269 02:32:19 -- scripts/common.sh@339 -- # ver1_l=2 00:15:39.269 02:32:19 -- scripts/common.sh@340 -- # ver2_l=1 00:15:39.269 02:32:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:39.269 02:32:19 -- scripts/common.sh@343 -- # case "$op" in 00:15:39.269 02:32:19 -- scripts/common.sh@344 -- # : 1 00:15:39.269 02:32:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:39.269 02:32:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.269 02:32:19 -- scripts/common.sh@364 -- # decimal 1 00:15:39.269 02:32:19 -- scripts/common.sh@352 -- # local d=1 00:15:39.269 02:32:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.269 02:32:19 -- scripts/common.sh@354 -- # echo 1 00:15:39.269 02:32:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:39.269 02:32:19 -- scripts/common.sh@365 -- # decimal 2 00:15:39.269 02:32:19 -- scripts/common.sh@352 -- # local d=2 00:15:39.269 02:32:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.269 02:32:19 -- scripts/common.sh@354 -- # echo 2 00:15:39.269 02:32:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:39.269 02:32:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:39.528 02:32:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:39.528 02:32:19 -- scripts/common.sh@367 -- # return 0 00:15:39.528 02:32:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.528 02:32:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.528 --rc genhtml_branch_coverage=1 00:15:39.528 --rc genhtml_function_coverage=1 00:15:39.528 --rc genhtml_legend=1 00:15:39.528 --rc geninfo_all_blocks=1 00:15:39.528 --rc geninfo_unexecuted_blocks=1 00:15:39.528 00:15:39.528 ' 00:15:39.528 02:32:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.528 --rc genhtml_branch_coverage=1 00:15:39.528 --rc genhtml_function_coverage=1 00:15:39.528 --rc genhtml_legend=1 00:15:39.528 --rc geninfo_all_blocks=1 00:15:39.528 --rc geninfo_unexecuted_blocks=1 00:15:39.528 00:15:39.528 ' 00:15:39.528 02:32:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.528 --rc genhtml_branch_coverage=1 00:15:39.528 --rc genhtml_function_coverage=1 00:15:39.528 --rc genhtml_legend=1 00:15:39.528 --rc geninfo_all_blocks=1 00:15:39.528 --rc geninfo_unexecuted_blocks=1 00:15:39.528 00:15:39.528 ' 00:15:39.528 
02:32:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.528 --rc genhtml_branch_coverage=1 00:15:39.528 --rc genhtml_function_coverage=1 00:15:39.528 --rc genhtml_legend=1 00:15:39.528 --rc geninfo_all_blocks=1 00:15:39.528 --rc geninfo_unexecuted_blocks=1 00:15:39.528 00:15:39.528 ' 00:15:39.528 02:32:19 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.528 02:32:19 -- nvmf/common.sh@7 -- # uname -s 00:15:39.528 02:32:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.528 02:32:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.528 02:32:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.528 02:32:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.528 02:32:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.528 02:32:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.528 02:32:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.528 02:32:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.528 02:32:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.528 02:32:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.528 02:32:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:15:39.528 02:32:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:15:39.528 02:32:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.528 02:32:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.528 02:32:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.528 02:32:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.528 02:32:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.528 02:32:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.528 02:32:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.528 02:32:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.528 02:32:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.528 02:32:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.528 02:32:19 -- paths/export.sh@5 -- # export PATH 00:15:39.528 02:32:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.528 02:32:19 -- nvmf/common.sh@46 -- # : 0 00:15:39.528 02:32:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.528 02:32:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.528 02:32:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.528 02:32:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.528 02:32:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.528 02:32:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:39.528 02:32:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.528 02:32:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.528 02:32:19 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:39.529 02:32:19 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:39.529 02:32:19 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:39.529 02:32:19 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:39.529 02:32:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.529 02:32:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.529 02:32:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.529 02:32:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.529 02:32:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:39.529 02:32:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.529 02:32:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.529 02:32:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.529 02:32:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:39.529 02:32:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:39.529 02:32:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:39.529 02:32:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:39.529 02:32:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:39.529 02:32:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:39.529 02:32:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.529 02:32:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.529 02:32:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.529 02:32:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:39.529 02:32:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.529 02:32:19 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.529 02:32:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.529 02:32:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.529 02:32:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.529 02:32:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.529 02:32:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.529 02:32:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.529 02:32:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:39.529 02:32:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:39.529 Cannot find device "nvmf_tgt_br" 00:15:39.529 02:32:19 -- nvmf/common.sh@154 -- # true 00:15:39.529 02:32:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.529 Cannot find device "nvmf_tgt_br2" 00:15:39.529 02:32:19 -- nvmf/common.sh@155 -- # true 00:15:39.529 02:32:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:39.529 02:32:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:39.529 Cannot find device "nvmf_tgt_br" 00:15:39.529 02:32:20 -- nvmf/common.sh@157 -- # true 00:15:39.529 02:32:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:39.529 Cannot find device "nvmf_tgt_br2" 00:15:39.529 02:32:20 -- nvmf/common.sh@158 -- # true 00:15:39.529 02:32:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:39.529 02:32:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:39.529 02:32:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.529 02:32:20 -- nvmf/common.sh@161 -- # true 00:15:39.529 02:32:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.529 02:32:20 -- nvmf/common.sh@162 -- # true 00:15:39.529 02:32:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.529 02:32:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.529 02:32:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.529 02:32:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.529 02:32:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.529 02:32:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.529 02:32:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.529 02:32:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:39.529 02:32:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:39.529 02:32:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:39.529 02:32:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:39.529 02:32:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:39.529 02:32:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:39.529 02:32:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.788 02:32:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:39.788 02:32:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.788 02:32:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:39.788 02:32:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:39.788 02:32:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.788 02:32:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.788 02:32:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.788 02:32:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.788 02:32:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.788 02:32:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:39.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:15:39.788 00:15:39.788 --- 10.0.0.2 ping statistics --- 00:15:39.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.788 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:15:39.788 02:32:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:39.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:15:39.788 00:15:39.788 --- 10.0.0.3 ping statistics --- 00:15:39.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.788 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:39.788 02:32:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:39.788 00:15:39.788 --- 10.0.0.1 ping statistics --- 00:15:39.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.788 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:39.788 02:32:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.788 02:32:20 -- nvmf/common.sh@421 -- # return 0 00:15:39.788 02:32:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:39.788 02:32:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.788 02:32:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:39.788 02:32:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:39.788 02:32:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.788 02:32:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:39.788 02:32:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:39.788 02:32:20 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:39.788 02:32:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:39.788 02:32:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.788 02:32:20 -- common/autotest_common.sh@10 -- # set +x 00:15:39.788 02:32:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.788 02:32:20 -- nvmf/common.sh@469 -- # nvmfpid=74539 00:15:39.788 02:32:20 -- nvmf/common.sh@470 -- # waitforlisten 74539 00:15:39.788 02:32:20 -- common/autotest_common.sh@829 -- # '[' -z 74539 ']' 00:15:39.789 02:32:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.789 02:32:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.789 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:15:39.789 02:32:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.789 02:32:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.789 02:32:20 -- common/autotest_common.sh@10 -- # set +x 00:15:39.789 [2024-11-21 02:32:20.352354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:39.789 [2024-11-21 02:32:20.352442] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.047 [2024-11-21 02:32:20.491291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.047 [2024-11-21 02:32:20.590923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:40.047 [2024-11-21 02:32:20.591066] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.048 [2024-11-21 02:32:20.591095] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.048 [2024-11-21 02:32:20.591105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.048 [2024-11-21 02:32:20.591155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.984 02:32:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.984 02:32:21 -- common/autotest_common.sh@862 -- # return 0 00:15:40.984 02:32:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:40.984 02:32:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.984 02:32:21 -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 02:32:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.984 02:32:21 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.984 02:32:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.984 02:32:21 -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 [2024-11-21 02:32:21.422737] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.984 02:32:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.984 02:32:21 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.984 02:32:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.984 02:32:21 -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 Malloc0 00:15:40.984 02:32:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.984 02:32:21 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:40.984 02:32:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.984 02:32:21 -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 02:32:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.984 02:32:21 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.984 02:32:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.984 02:32:21 -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 02:32:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.984 02:32:21 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
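The queue_depth target setup traced above is driven entirely through rpc_cmd: create the TCP transport, back the subsystem with a 64 MiB malloc bdev, and publish a listener on 10.0.0.2:4420 (an address that only exists inside the nvmf_tgt_ns_spdk namespace built earlier in this log). A minimal standalone sketch of that same sequence, assuming a running nvmf_tgt and rpc.py talking to its default /var/tmp/spdk.sock socket:

#!/usr/bin/env bash
# Target-side sketch of the setup performed by queue_depth.sh; every flag is
# copied verbatim from the trace above. Assumes nvmf_tgt is already running.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, options as in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420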
00:15:40.984 02:32:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.984 02:32:21 -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 [2024-11-21 02:32:21.495996] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.984 02:32:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.984 02:32:21 -- target/queue_depth.sh@30 -- # bdevperf_pid=74589 00:15:40.984 02:32:21 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:40.984 02:32:21 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:40.984 02:32:21 -- target/queue_depth.sh@33 -- # waitforlisten 74589 /var/tmp/bdevperf.sock 00:15:40.984 02:32:21 -- common/autotest_common.sh@829 -- # '[' -z 74589 ']' 00:15:40.984 02:32:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:40.984 02:32:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.984 02:32:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:40.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:40.984 02:32:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.984 02:32:21 -- common/autotest_common.sh@10 -- # set +x 00:15:40.984 [2024-11-21 02:32:21.558271] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:40.984 [2024-11-21 02:32:21.558351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74589 ] 00:15:41.243 [2024-11-21 02:32:21.697366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.243 [2024-11-21 02:32:21.801562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.178 02:32:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.178 02:32:22 -- common/autotest_common.sh@862 -- # return 0 00:15:42.178 02:32:22 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:42.178 02:32:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.178 02:32:22 -- common/autotest_common.sh@10 -- # set +x 00:15:42.178 NVMe0n1 00:15:42.178 02:32:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.178 02:32:22 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:42.178 Running I/O for 10 seconds... 
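Everything between the listener notice and the results table below is the initiator side of the test: bdevperf is started in wait mode, an NVMe-oF/TCP controller is attached to it over its private RPC socket, and the queued-up run is kicked off for ten seconds at queue depth 1024. A condensed sketch of that flow, with a simple socket-wait loop standing in for the waitforlisten helper used in the trace:

#!/usr/bin/env bash
# Initiator-side sketch of the queue_depth run; paths and flags are the ones
# visible in the trace above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# 4 KiB verify workload, queue depth 1024, 10 s; -z makes bdevperf wait for RPC.
"$SPDK_DIR"/build/examples/bdevperf -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
while [ ! -S "$sock" ]; do sleep 0.2; done   # simplified stand-in for waitforlisten

"$SPDK_DIR"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests   # prints the IOPS/latency table

kill "$bdevperf_pid"   # the test script does the same via killprocess once the run ends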
00:15:52.151 00:15:52.151 Latency(us) 00:15:52.151 [2024-11-21T02:32:32.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.151 [2024-11-21T02:32:32.798Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:52.151 Verification LBA range: start 0x0 length 0x4000 00:15:52.151 NVMe0n1 : 10.05 16750.51 65.43 0.00 0.00 60943.55 11379.43 70063.94 00:15:52.151 [2024-11-21T02:32:32.798Z] =================================================================================================================== 00:15:52.151 [2024-11-21T02:32:32.798Z] Total : 16750.51 65.43 0.00 0.00 60943.55 11379.43 70063.94 00:15:52.151 0 00:15:52.151 02:32:32 -- target/queue_depth.sh@39 -- # killprocess 74589 00:15:52.152 02:32:32 -- common/autotest_common.sh@936 -- # '[' -z 74589 ']' 00:15:52.152 02:32:32 -- common/autotest_common.sh@940 -- # kill -0 74589 00:15:52.152 02:32:32 -- common/autotest_common.sh@941 -- # uname 00:15:52.152 02:32:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.152 02:32:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74589 00:15:52.152 02:32:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:52.152 02:32:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:52.152 killing process with pid 74589 00:15:52.152 02:32:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74589' 00:15:52.152 Received shutdown signal, test time was about 10.000000 seconds 00:15:52.152 00:15:52.152 Latency(us) 00:15:52.152 [2024-11-21T02:32:32.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.152 [2024-11-21T02:32:32.799Z] =================================================================================================================== 00:15:52.152 [2024-11-21T02:32:32.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:52.152 02:32:32 -- common/autotest_common.sh@955 -- # kill 74589 00:15:52.152 02:32:32 -- common/autotest_common.sh@960 -- # wait 74589 00:15:52.410 02:32:33 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:52.410 02:32:33 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:52.410 02:32:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:52.410 02:32:33 -- nvmf/common.sh@116 -- # sync 00:15:52.670 02:32:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:52.670 02:32:33 -- nvmf/common.sh@119 -- # set +e 00:15:52.670 02:32:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:52.670 02:32:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:52.670 rmmod nvme_tcp 00:15:52.670 rmmod nvme_fabrics 00:15:52.670 rmmod nvme_keyring 00:15:52.670 02:32:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:52.670 02:32:33 -- nvmf/common.sh@123 -- # set -e 00:15:52.670 02:32:33 -- nvmf/common.sh@124 -- # return 0 00:15:52.670 02:32:33 -- nvmf/common.sh@477 -- # '[' -n 74539 ']' 00:15:52.670 02:32:33 -- nvmf/common.sh@478 -- # killprocess 74539 00:15:52.670 02:32:33 -- common/autotest_common.sh@936 -- # '[' -z 74539 ']' 00:15:52.670 02:32:33 -- common/autotest_common.sh@940 -- # kill -0 74539 00:15:52.670 02:32:33 -- common/autotest_common.sh@941 -- # uname 00:15:52.670 02:32:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.670 02:32:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74539 00:15:52.670 02:32:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:52.670 02:32:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:15:52.670 killing process with pid 74539 00:15:52.670 02:32:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74539' 00:15:52.670 02:32:33 -- common/autotest_common.sh@955 -- # kill 74539 00:15:52.670 02:32:33 -- common/autotest_common.sh@960 -- # wait 74539 00:15:52.929 02:32:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:52.929 02:32:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:52.929 02:32:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:52.929 02:32:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.929 02:32:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:52.929 02:32:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.929 02:32:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.929 02:32:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.929 02:32:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:52.929 00:15:52.929 real 0m13.718s 00:15:52.929 user 0m22.696s 00:15:52.929 sys 0m2.619s 00:15:52.929 02:32:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:52.929 ************************************ 00:15:52.929 END TEST nvmf_queue_depth 00:15:52.929 ************************************ 00:15:52.929 02:32:33 -- common/autotest_common.sh@10 -- # set +x 00:15:52.929 02:32:33 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:52.929 02:32:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:52.929 02:32:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.929 02:32:33 -- common/autotest_common.sh@10 -- # set +x 00:15:52.929 ************************************ 00:15:52.929 START TEST nvmf_multipath 00:15:52.929 ************************************ 00:15:52.929 02:32:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:53.189 * Looking for test storage... 00:15:53.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.189 02:32:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:53.189 02:32:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:53.189 02:32:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:53.189 02:32:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:53.189 02:32:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:53.189 02:32:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:53.189 02:32:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:53.189 02:32:33 -- scripts/common.sh@335 -- # IFS=.-: 00:15:53.189 02:32:33 -- scripts/common.sh@335 -- # read -ra ver1 00:15:53.189 02:32:33 -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.189 02:32:33 -- scripts/common.sh@336 -- # read -ra ver2 00:15:53.189 02:32:33 -- scripts/common.sh@337 -- # local 'op=<' 00:15:53.189 02:32:33 -- scripts/common.sh@339 -- # ver1_l=2 00:15:53.189 02:32:33 -- scripts/common.sh@340 -- # ver2_l=1 00:15:53.189 02:32:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:53.189 02:32:33 -- scripts/common.sh@343 -- # case "$op" in 00:15:53.189 02:32:33 -- scripts/common.sh@344 -- # : 1 00:15:53.189 02:32:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:53.189 02:32:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.189 02:32:33 -- scripts/common.sh@364 -- # decimal 1 00:15:53.189 02:32:33 -- scripts/common.sh@352 -- # local d=1 00:15:53.189 02:32:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.189 02:32:33 -- scripts/common.sh@354 -- # echo 1 00:15:53.189 02:32:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:53.189 02:32:33 -- scripts/common.sh@365 -- # decimal 2 00:15:53.189 02:32:33 -- scripts/common.sh@352 -- # local d=2 00:15:53.189 02:32:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.189 02:32:33 -- scripts/common.sh@354 -- # echo 2 00:15:53.189 02:32:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:53.189 02:32:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:53.189 02:32:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:53.189 02:32:33 -- scripts/common.sh@367 -- # return 0 00:15:53.189 02:32:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.189 02:32:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.189 --rc genhtml_branch_coverage=1 00:15:53.189 --rc genhtml_function_coverage=1 00:15:53.189 --rc genhtml_legend=1 00:15:53.189 --rc geninfo_all_blocks=1 00:15:53.189 --rc geninfo_unexecuted_blocks=1 00:15:53.189 00:15:53.189 ' 00:15:53.189 02:32:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.189 --rc genhtml_branch_coverage=1 00:15:53.189 --rc genhtml_function_coverage=1 00:15:53.189 --rc genhtml_legend=1 00:15:53.189 --rc geninfo_all_blocks=1 00:15:53.189 --rc geninfo_unexecuted_blocks=1 00:15:53.189 00:15:53.189 ' 00:15:53.189 02:32:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.189 --rc genhtml_branch_coverage=1 00:15:53.189 --rc genhtml_function_coverage=1 00:15:53.189 --rc genhtml_legend=1 00:15:53.189 --rc geninfo_all_blocks=1 00:15:53.189 --rc geninfo_unexecuted_blocks=1 00:15:53.189 00:15:53.189 ' 00:15:53.189 02:32:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:53.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.189 --rc genhtml_branch_coverage=1 00:15:53.189 --rc genhtml_function_coverage=1 00:15:53.189 --rc genhtml_legend=1 00:15:53.189 --rc geninfo_all_blocks=1 00:15:53.189 --rc geninfo_unexecuted_blocks=1 00:15:53.189 00:15:53.189 ' 00:15:53.189 02:32:33 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.189 02:32:33 -- nvmf/common.sh@7 -- # uname -s 00:15:53.189 02:32:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.189 02:32:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.189 02:32:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.189 02:32:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.189 02:32:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.189 02:32:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.189 02:32:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.189 02:32:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.189 02:32:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.189 02:32:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.189 02:32:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:15:53.189 
02:32:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:15:53.189 02:32:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.189 02:32:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.189 02:32:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.189 02:32:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.189 02:32:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.189 02:32:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.189 02:32:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.189 02:32:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.190 02:32:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.190 02:32:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.190 02:32:33 -- paths/export.sh@5 -- # export PATH 00:15:53.190 02:32:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.190 02:32:33 -- nvmf/common.sh@46 -- # : 0 00:15:53.190 02:32:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:53.190 02:32:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:53.190 02:32:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:53.190 02:32:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.190 02:32:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.190 02:32:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:53.190 02:32:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:53.190 02:32:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:53.190 02:32:33 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.190 02:32:33 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.190 02:32:33 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:53.190 02:32:33 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.190 02:32:33 -- target/multipath.sh@43 -- # nvmftestinit 00:15:53.190 02:32:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:53.190 02:32:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.190 02:32:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:53.190 02:32:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:53.190 02:32:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:53.190 02:32:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.190 02:32:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.190 02:32:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.190 02:32:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:53.190 02:32:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:53.190 02:32:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:53.190 02:32:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:53.190 02:32:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:53.190 02:32:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:53.190 02:32:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.190 02:32:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.190 02:32:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.190 02:32:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:53.190 02:32:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.190 02:32:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.190 02:32:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.190 02:32:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.190 02:32:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.190 02:32:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.190 02:32:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.190 02:32:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.190 02:32:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:53.190 02:32:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:53.190 Cannot find device "nvmf_tgt_br" 00:15:53.190 02:32:33 -- nvmf/common.sh@154 -- # true 00:15:53.190 02:32:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.190 Cannot find device "nvmf_tgt_br2" 00:15:53.190 02:32:33 -- nvmf/common.sh@155 -- # true 00:15:53.190 02:32:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:53.190 02:32:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:53.190 Cannot find device "nvmf_tgt_br" 00:15:53.190 02:32:33 -- nvmf/common.sh@157 -- # true 00:15:53.190 02:32:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:53.190 Cannot find device "nvmf_tgt_br2" 00:15:53.190 02:32:33 -- nvmf/common.sh@158 -- # true 00:15:53.190 02:32:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:53.449 02:32:33 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:53.449 02:32:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.449 02:32:33 -- nvmf/common.sh@161 -- # true 00:15:53.449 02:32:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.449 02:32:33 -- nvmf/common.sh@162 -- # true 00:15:53.449 02:32:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.449 02:32:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.449 02:32:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.449 02:32:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.449 02:32:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.449 02:32:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.449 02:32:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.449 02:32:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.449 02:32:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:53.449 02:32:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:53.449 02:32:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:53.449 02:32:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:53.449 02:32:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:53.449 02:32:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.449 02:32:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.449 02:32:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.449 02:32:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:53.449 02:32:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:53.449 02:32:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.449 02:32:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.449 02:32:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.449 02:32:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.449 02:32:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.449 02:32:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:53.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:15:53.449 00:15:53.449 --- 10.0.0.2 ping statistics --- 00:15:53.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.449 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:53.449 02:32:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:53.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:53.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:53.449 00:15:53.449 --- 10.0.0.3 ping statistics --- 00:15:53.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.449 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:53.449 02:32:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:53.449 00:15:53.449 --- 10.0.0.1 ping statistics --- 00:15:53.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.449 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:53.449 02:32:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.449 02:32:34 -- nvmf/common.sh@421 -- # return 0 00:15:53.449 02:32:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:53.449 02:32:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.449 02:32:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:53.449 02:32:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:53.449 02:32:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.449 02:32:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:53.449 02:32:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:53.449 02:32:34 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:53.449 02:32:34 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:53.449 02:32:34 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:53.449 02:32:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:53.449 02:32:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:53.449 02:32:34 -- common/autotest_common.sh@10 -- # set +x 00:15:53.708 02:32:34 -- nvmf/common.sh@469 -- # nvmfpid=74929 00:15:53.708 02:32:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:53.708 02:32:34 -- nvmf/common.sh@470 -- # waitforlisten 74929 00:15:53.708 02:32:34 -- common/autotest_common.sh@829 -- # '[' -z 74929 ']' 00:15:53.708 02:32:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.708 02:32:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.708 02:32:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.708 02:32:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.708 02:32:34 -- common/autotest_common.sh@10 -- # set +x 00:15:53.708 [2024-11-21 02:32:34.145086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:53.708 [2024-11-21 02:32:34.145148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.708 [2024-11-21 02:32:34.279313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.967 [2024-11-21 02:32:34.366901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.967 [2024-11-21 02:32:34.367053] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:53.967 [2024-11-21 02:32:34.367065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.967 [2024-11-21 02:32:34.367073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.967 [2024-11-21 02:32:34.367242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.967 [2024-11-21 02:32:34.367389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.967 [2024-11-21 02:32:34.367893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.967 [2024-11-21 02:32:34.367900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.534 02:32:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.534 02:32:35 -- common/autotest_common.sh@862 -- # return 0 00:15:54.534 02:32:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:54.534 02:32:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.534 02:32:35 -- common/autotest_common.sh@10 -- # set +x 00:15:54.793 02:32:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.793 02:32:35 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:55.052 [2024-11-21 02:32:35.477737] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.052 02:32:35 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:55.311 Malloc0 00:15:55.311 02:32:35 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:55.569 02:32:35 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.570 02:32:36 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.828 [2024-11-21 02:32:36.390075] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.828 02:32:36 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:56.087 [2024-11-21 02:32:36.602269] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:56.087 02:32:36 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:56.345 02:32:36 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:56.604 02:32:37 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:56.604 02:32:37 -- common/autotest_common.sh@1187 -- # local i=0 00:15:56.604 02:32:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.604 02:32:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:56.604 02:32:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:58.507 02:32:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
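For the multipath test the host connects to the same NQN over both target addresses, so the kernel builds one shared namespace with two controller paths (nvme0c0n1 and nvme0c1n1) whose ANA states the check_ana_state helper keeps polling in the trace that follows. A hedged sketch of that host-side sequence; the host NQN/ID are the generated values from this particular run, and the device names assume this is the first NVMe subsystem on the machine:

#!/usr/bin/env bash
# Connect the initiator to the same subsystem through both listeners; the
# nvme connect flags are copied verbatim from the invocations in the trace.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b
HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b

for addr in 10.0.0.2 10.0.0.3; do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a "$addr" -s 4420 -g -G
done

# Each path exposes its ANA state in sysfs; this is the file the test re-reads
# after every nvmf_subsystem_listener_set_ana_state RPC.
for p in /sys/block/nvme0c*n1; do
    echo "$(basename "$p"): $(cat "$p/ana_state")"
done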
00:15:58.507 02:32:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:58.507 02:32:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:58.507 02:32:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:58.507 02:32:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.507 02:32:39 -- common/autotest_common.sh@1197 -- # return 0 00:15:58.507 02:32:39 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:58.507 02:32:39 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:58.507 02:32:39 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:58.507 02:32:39 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:58.507 02:32:39 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:58.507 02:32:39 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:58.507 02:32:39 -- target/multipath.sh@38 -- # return 0 00:15:58.507 02:32:39 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:58.507 02:32:39 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:58.507 02:32:39 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:58.507 02:32:39 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:58.507 02:32:39 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:58.507 02:32:39 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:58.507 02:32:39 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:58.507 02:32:39 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:58.507 02:32:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:58.507 02:32:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:58.507 02:32:39 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:58.507 02:32:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:58.507 02:32:39 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:58.507 02:32:39 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:58.507 02:32:39 -- target/multipath.sh@22 -- # local timeout=20 00:15:58.507 02:32:39 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:58.507 02:32:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:58.507 02:32:39 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:58.507 02:32:39 -- target/multipath.sh@85 -- # echo numa 00:15:58.507 02:32:39 -- target/multipath.sh@88 -- # fio_pid=75072 00:15:58.507 02:32:39 -- target/multipath.sh@90 -- # sleep 1 00:15:58.507 02:32:39 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:58.507 [global] 00:15:58.507 thread=1 00:15:58.507 invalidate=1 00:15:58.507 rw=randrw 00:15:58.507 time_based=1 00:15:58.507 runtime=6 00:15:58.507 ioengine=libaio 00:15:58.507 direct=1 00:15:58.507 bs=4096 00:15:58.507 iodepth=128 00:15:58.507 norandommap=0 00:15:58.507 numjobs=1 00:15:58.507 00:15:58.507 verify_dump=1 00:15:58.507 verify_backlog=512 00:15:58.507 verify_state_save=0 00:15:58.507 do_verify=1 00:15:58.507 verify=crc32c-intel 00:15:58.507 [job0] 00:15:58.507 filename=/dev/nvme0n1 00:15:58.507 Could not set queue depth (nvme0n1) 00:15:58.765 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.765 fio-3.35 00:15:58.765 Starting 1 thread 00:15:59.701 02:32:40 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:59.960 02:32:40 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:59.960 02:32:40 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:59.960 02:32:40 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:59.960 02:32:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:59.960 02:32:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:59.960 02:32:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:59.960 02:32:40 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:59.960 02:32:40 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:59.960 02:32:40 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:59.960 02:32:40 -- target/multipath.sh@22 -- # local timeout=20 00:15:59.960 02:32:40 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:59.960 02:32:40 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:59.960 02:32:40 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:59.960 02:32:40 -- target/multipath.sh@25 -- # sleep 1s 00:16:01.337 02:32:41 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:01.337 02:32:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:01.337 02:32:41 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:01.337 02:32:41 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:01.337 02:32:41 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:01.596 02:32:42 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:16:01.596 02:32:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:01.596 02:32:42 -- target/multipath.sh@22 -- # local timeout=20 00:16:01.596 02:32:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:01.596 02:32:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:01.596 02:32:42 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:01.596 02:32:42 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:16:01.596 02:32:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:01.596 02:32:42 -- target/multipath.sh@22 -- # local timeout=20 00:16:01.596 02:32:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:01.596 02:32:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:01.596 02:32:42 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:01.596 02:32:42 -- target/multipath.sh@25 -- # sleep 1s 00:16:02.532 02:32:43 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:02.532 02:32:43 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:02.532 02:32:43 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:02.532 02:32:43 -- target/multipath.sh@104 -- # wait 75072 00:16:05.110 00:16:05.110 job0: (groupid=0, jobs=1): err= 0: pid=75093: Thu Nov 21 02:32:45 2024 00:16:05.110 read: IOPS=13.0k, BW=51.0MiB/s (53.4MB/s)(306MiB/6001msec) 00:16:05.110 slat (usec): min=2, max=13546, avg=43.23, stdev=200.45 00:16:05.110 clat (usec): min=714, max=23593, avg=6740.99, stdev=1222.97 00:16:05.110 lat (usec): min=786, max=23606, avg=6784.23, stdev=1229.76 00:16:05.110 clat percentiles (usec): 00:16:05.110 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5538], 20.00th=[ 5866], 00:16:05.110 | 30.00th=[ 6128], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6915], 00:16:05.110 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7963], 95.00th=[ 8586], 00:16:05.110 | 99.00th=[10159], 99.50th=[10683], 99.90th=[21103], 99.95th=[22676], 00:16:05.110 | 99.99th=[23200] 00:16:05.110 bw ( KiB/s): min=14400, max=32600, per=52.48%, avg=27386.91, stdev=6094.12, samples=11 00:16:05.110 iops : min= 3600, max= 8150, avg=6846.73, stdev=1523.53, samples=11 00:16:05.110 write: IOPS=7521, BW=29.4MiB/s (30.8MB/s)(155MiB/5272msec); 0 zone resets 00:16:05.110 slat (usec): min=4, max=2934, avg=55.80, stdev=134.92 00:16:05.110 clat (usec): min=693, max=23045, avg=5861.70, stdev=975.75 00:16:05.110 lat (usec): min=752, max=23190, avg=5917.50, stdev=978.08 00:16:05.110 clat percentiles (usec): 00:16:05.110 | 1.00th=[ 3294], 5.00th=[ 4146], 10.00th=[ 4883], 20.00th=[ 5276], 00:16:05.110 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 6063], 00:16:05.110 | 70.00th=[ 6259], 80.00th=[ 6456], 90.00th=[ 6783], 95.00th=[ 7177], 00:16:05.110 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[10814], 99.95th=[11207], 00:16:05.110 | 99.99th=[22938] 00:16:05.110 bw ( KiB/s): min=15120, max=32272, per=90.90%, avg=27350.55, stdev=5747.42, samples=11 00:16:05.110 iops : min= 3780, max= 8068, avg=6837.64, stdev=1436.86, samples=11 00:16:05.111 lat (usec) : 750=0.01%, 1000=0.01% 00:16:05.111 lat (msec) : 2=0.03%, 4=1.82%, 10=97.29%, 20=0.76%, 50=0.10% 00:16:05.111 cpu : usr=6.11%, sys=25.02%, ctx=7154, majf=0, minf=90 00:16:05.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:05.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:05.111 issued rwts: total=78291,39655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:05.111 00:16:05.111 Run status group 0 (all jobs): 00:16:05.111 READ: bw=51.0MiB/s (53.4MB/s), 51.0MiB/s-51.0MiB/s (53.4MB/s-53.4MB/s), io=306MiB (321MB), run=6001-6001msec 00:16:05.111 WRITE: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=155MiB (162MB), run=5272-5272msec 00:16:05.111 00:16:05.111 Disk stats (read/write): 00:16:05.111 nvme0n1: ios=77235/38826, merge=0/0, ticks=482670/210425, in_queue=693095, util=98.65% 00:16:05.111 02:32:45 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:05.111 02:32:45 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:05.370 02:32:45 -- target/multipath.sh@109 -- # 
check_ana_state nvme0c0n1 optimized 00:16:05.370 02:32:45 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:05.370 02:32:45 -- target/multipath.sh@22 -- # local timeout=20 00:16:05.370 02:32:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:05.370 02:32:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:05.370 02:32:45 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:05.370 02:32:45 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:16:05.370 02:32:45 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:05.370 02:32:45 -- target/multipath.sh@22 -- # local timeout=20 00:16:05.370 02:32:45 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:05.370 02:32:45 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:05.370 02:32:45 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:16:05.370 02:32:45 -- target/multipath.sh@25 -- # sleep 1s 00:16:06.747 02:32:46 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:06.747 02:32:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:06.747 02:32:46 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:06.747 02:32:46 -- target/multipath.sh@113 -- # echo round-robin 00:16:06.747 02:32:46 -- target/multipath.sh@116 -- # fio_pid=75224 00:16:06.748 02:32:46 -- target/multipath.sh@118 -- # sleep 1 00:16:06.748 02:32:46 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:06.748 [global] 00:16:06.748 thread=1 00:16:06.748 invalidate=1 00:16:06.748 rw=randrw 00:16:06.748 time_based=1 00:16:06.748 runtime=6 00:16:06.748 ioengine=libaio 00:16:06.748 direct=1 00:16:06.748 bs=4096 00:16:06.748 iodepth=128 00:16:06.748 norandommap=0 00:16:06.748 numjobs=1 00:16:06.748 00:16:06.748 verify_dump=1 00:16:06.748 verify_backlog=512 00:16:06.748 verify_state_save=0 00:16:06.748 do_verify=1 00:16:06.748 verify=crc32c-intel 00:16:06.748 [job0] 00:16:06.748 filename=/dev/nvme0n1 00:16:06.748 Could not set queue depth (nvme0n1) 00:16:06.748 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.748 fio-3.35 00:16:06.748 Starting 1 thread 00:16:07.685 02:32:47 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:07.685 02:32:48 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:07.945 02:32:48 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:16:07.945 02:32:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:07.945 02:32:48 -- target/multipath.sh@22 -- # local timeout=20 00:16:07.945 02:32:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:07.945 02:32:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:07.945 02:32:48 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:07.945 02:32:48 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:16:07.945 02:32:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:07.945 02:32:48 -- target/multipath.sh@22 -- # local timeout=20 00:16:07.945 02:32:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:07.945 02:32:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:07.945 02:32:48 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:07.945 02:32:48 -- target/multipath.sh@25 -- # sleep 1s 00:16:08.881 02:32:49 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:08.881 02:32:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:08.881 02:32:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:08.881 02:32:49 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:09.449 02:32:49 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:09.449 02:32:50 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:16:09.449 02:32:50 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:09.449 02:32:50 -- target/multipath.sh@22 -- # local timeout=20 00:16:09.449 02:32:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:09.449 02:32:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:09.449 02:32:50 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:09.449 02:32:50 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:16:09.449 02:32:50 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:09.449 02:32:50 -- target/multipath.sh@22 -- # local timeout=20 00:16:09.449 02:32:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:09.449 02:32:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:09.449 02:32:50 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:09.449 02:32:50 -- target/multipath.sh@25 -- # sleep 1s 00:16:10.825 02:32:51 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:10.825 02:32:51 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:10.825 02:32:51 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:10.825 02:32:51 -- target/multipath.sh@132 -- # wait 75224 00:16:12.728 00:16:12.728 job0: (groupid=0, jobs=1): err= 0: pid=75245: Thu Nov 21 02:32:53 2024 00:16:12.728 read: IOPS=13.0k, BW=50.9MiB/s (53.3MB/s)(306MiB/6005msec) 00:16:12.728 slat (usec): min=4, max=5827, avg=38.37, stdev=184.34 00:16:12.728 clat (usec): min=520, max=17152, avg=6811.44, stdev=1428.85 00:16:12.728 lat (usec): min=532, max=17158, avg=6849.81, stdev=1433.48 00:16:12.728 clat percentiles (usec): 00:16:12.728 | 1.00th=[ 2868], 5.00th=[ 4555], 10.00th=[ 5407], 20.00th=[ 5997], 00:16:12.728 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 6980], 00:16:12.729 | 70.00th=[ 7308], 80.00th=[ 7701], 90.00th=[ 8455], 95.00th=[ 9241], 00:16:12.729 | 99.00th=[11076], 99.50th=[11863], 99.90th=[13698], 99.95th=[14484], 00:16:12.729 | 99.99th=[15795] 00:16:12.729 bw ( KiB/s): min= 8712, max=37784, per=52.69%, avg=27452.45, stdev=9160.06, samples=11 00:16:12.729 iops : min= 2178, max= 9446, avg=6863.00, stdev=2289.96, samples=11 00:16:12.729 write: IOPS=7757, BW=30.3MiB/s (31.8MB/s)(154MiB/5070msec); 0 zone resets 00:16:12.729 slat (usec): min=11, max=2202, avg=49.40, stdev=124.31 00:16:12.729 clat (usec): min=432, max=12490, avg=5833.69, stdev=1169.40 00:16:12.729 lat (usec): min=493, max=12507, avg=5883.09, stdev=1173.75 00:16:12.729 clat percentiles (usec): 00:16:12.729 | 1.00th=[ 2769], 5.00th=[ 3490], 10.00th=[ 4113], 20.00th=[ 5145], 00:16:12.729 | 30.00th=[ 5538], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6194], 00:16:12.729 | 70.00th=[ 6325], 80.00th=[ 6587], 90.00th=[ 6980], 95.00th=[ 7439], 00:16:12.729 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10814], 99.95th=[11338], 00:16:12.729 | 99.99th=[12256] 00:16:12.729 bw ( KiB/s): min= 9072, max=36904, per=88.46%, avg=27449.00, stdev=8955.83, samples=11 00:16:12.729 iops : min= 2268, max= 9226, avg=6862.18, stdev=2238.92, samples=11 00:16:12.729 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:16:12.729 lat (msec) : 2=0.20%, 4=4.90%, 10=92.98%, 20=1.91% 00:16:12.729 cpu : usr=5.75%, sys=22.87%, ctx=7184, majf=0, minf=127 00:16:12.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:12.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:12.729 issued rwts: total=78210,39331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:12.729 00:16:12.729 Run status group 0 (all jobs): 00:16:12.729 READ: bw=50.9MiB/s (53.3MB/s), 50.9MiB/s-50.9MiB/s (53.3MB/s-53.3MB/s), io=306MiB (320MB), run=6005-6005msec 00:16:12.729 WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=154MiB (161MB), run=5070-5070msec 00:16:12.729 00:16:12.729 Disk stats (read/write): 00:16:12.729 nvme0n1: ios=76779/38917, merge=0/0, ticks=491820/212195, in_queue=704015, util=98.63% 00:16:12.729 02:32:53 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:12.987 02:32:53 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:12.987 02:32:53 -- common/autotest_common.sh@1208 -- # local i=0 00:16:12.987 02:32:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:12.987 02:32:53 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.987 02:32:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.987 02:32:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:12.987 02:32:53 -- common/autotest_common.sh@1220 -- # return 0 00:16:12.987 02:32:53 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.246 02:32:53 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:16:13.246 02:32:53 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:16:13.246 02:32:53 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:16:13.246 02:32:53 -- target/multipath.sh@144 -- # nvmftestfini 00:16:13.246 02:32:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:13.246 02:32:53 -- nvmf/common.sh@116 -- # sync 00:16:13.246 02:32:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:13.246 02:32:53 -- nvmf/common.sh@119 -- # set +e 00:16:13.246 02:32:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:13.246 02:32:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:13.246 rmmod nvme_tcp 00:16:13.246 rmmod nvme_fabrics 00:16:13.246 rmmod nvme_keyring 00:16:13.505 02:32:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:13.505 02:32:53 -- nvmf/common.sh@123 -- # set -e 00:16:13.505 02:32:53 -- nvmf/common.sh@124 -- # return 0 00:16:13.505 02:32:53 -- nvmf/common.sh@477 -- # '[' -n 74929 ']' 00:16:13.505 02:32:53 -- nvmf/common.sh@478 -- # killprocess 74929 00:16:13.505 02:32:53 -- common/autotest_common.sh@936 -- # '[' -z 74929 ']' 00:16:13.505 02:32:53 -- common/autotest_common.sh@940 -- # kill -0 74929 00:16:13.505 02:32:53 -- common/autotest_common.sh@941 -- # uname 00:16:13.505 02:32:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.505 02:32:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74929 00:16:13.505 killing process with pid 74929 00:16:13.505 02:32:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.505 02:32:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.505 02:32:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74929' 00:16:13.505 02:32:53 -- common/autotest_common.sh@955 -- # kill 74929 00:16:13.505 02:32:53 -- common/autotest_common.sh@960 -- # wait 74929 00:16:13.764 02:32:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:13.764 02:32:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:13.764 02:32:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:13.764 02:32:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.764 02:32:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:13.764 02:32:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.764 02:32:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.764 02:32:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.764 02:32:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:13.764 ************************************ 00:16:13.764 END TEST nvmf_multipath 00:16:13.764 ************************************ 00:16:13.764 00:16:13.764 real 0m20.830s 00:16:13.764 user 1m21.142s 00:16:13.764 sys 0m6.438s 00:16:13.764 02:32:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:13.764 02:32:54 -- common/autotest_common.sh@10 -- # set +x 00:16:13.764 02:32:54 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:13.764 02:32:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.764 02:32:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.764 02:32:54 -- common/autotest_common.sh@10 -- # set +x 00:16:13.764 ************************************ 00:16:13.764 START TEST nvmf_zcopy 00:16:13.764 ************************************ 00:16:13.764 02:32:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:14.023 * Looking for test storage... 00:16:14.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:14.023 02:32:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:14.023 02:32:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:14.023 02:32:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:14.023 02:32:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:14.023 02:32:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:14.023 02:32:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:14.023 02:32:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:14.023 02:32:54 -- scripts/common.sh@335 -- # IFS=.-: 00:16:14.023 02:32:54 -- scripts/common.sh@335 -- # read -ra ver1 00:16:14.023 02:32:54 -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.023 02:32:54 -- scripts/common.sh@336 -- # read -ra ver2 00:16:14.023 02:32:54 -- scripts/common.sh@337 -- # local 'op=<' 00:16:14.023 02:32:54 -- scripts/common.sh@339 -- # ver1_l=2 00:16:14.023 02:32:54 -- scripts/common.sh@340 -- # ver2_l=1 00:16:14.023 02:32:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:14.023 02:32:54 -- scripts/common.sh@343 -- # case "$op" in 00:16:14.023 02:32:54 -- scripts/common.sh@344 -- # : 1 00:16:14.023 02:32:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:14.023 02:32:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.023 02:32:54 -- scripts/common.sh@364 -- # decimal 1 00:16:14.023 02:32:54 -- scripts/common.sh@352 -- # local d=1 00:16:14.023 02:32:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.023 02:32:54 -- scripts/common.sh@354 -- # echo 1 00:16:14.023 02:32:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:14.023 02:32:54 -- scripts/common.sh@365 -- # decimal 2 00:16:14.023 02:32:54 -- scripts/common.sh@352 -- # local d=2 00:16:14.023 02:32:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.023 02:32:54 -- scripts/common.sh@354 -- # echo 2 00:16:14.023 02:32:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:14.023 02:32:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:14.023 02:32:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:14.023 02:32:54 -- scripts/common.sh@367 -- # return 0 00:16:14.023 02:32:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.023 02:32:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:14.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.023 --rc genhtml_branch_coverage=1 00:16:14.023 --rc genhtml_function_coverage=1 00:16:14.023 --rc genhtml_legend=1 00:16:14.024 --rc geninfo_all_blocks=1 00:16:14.024 --rc geninfo_unexecuted_blocks=1 00:16:14.024 00:16:14.024 ' 00:16:14.024 02:32:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:14.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.024 --rc genhtml_branch_coverage=1 00:16:14.024 --rc genhtml_function_coverage=1 00:16:14.024 --rc genhtml_legend=1 00:16:14.024 --rc geninfo_all_blocks=1 00:16:14.024 --rc geninfo_unexecuted_blocks=1 00:16:14.024 00:16:14.024 ' 00:16:14.024 02:32:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:14.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.024 --rc genhtml_branch_coverage=1 00:16:14.024 --rc genhtml_function_coverage=1 00:16:14.024 --rc genhtml_legend=1 00:16:14.024 --rc geninfo_all_blocks=1 00:16:14.024 --rc geninfo_unexecuted_blocks=1 00:16:14.024 00:16:14.024 ' 00:16:14.024 02:32:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:14.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.024 --rc genhtml_branch_coverage=1 00:16:14.024 --rc genhtml_function_coverage=1 00:16:14.024 --rc genhtml_legend=1 00:16:14.024 --rc geninfo_all_blocks=1 00:16:14.024 --rc geninfo_unexecuted_blocks=1 00:16:14.024 00:16:14.024 ' 00:16:14.024 02:32:54 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.024 02:32:54 -- nvmf/common.sh@7 -- # uname -s 00:16:14.024 02:32:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.024 02:32:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.024 02:32:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.024 02:32:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.024 02:32:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.024 02:32:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.024 02:32:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.024 02:32:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.024 02:32:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.024 02:32:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.024 02:32:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:16:14.024 
02:32:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:16:14.024 02:32:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.024 02:32:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.024 02:32:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.024 02:32:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.024 02:32:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.024 02:32:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.024 02:32:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.024 02:32:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.024 02:32:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.024 02:32:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.024 02:32:54 -- paths/export.sh@5 -- # export PATH 00:16:14.024 02:32:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.024 02:32:54 -- nvmf/common.sh@46 -- # : 0 00:16:14.024 02:32:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.024 02:32:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.024 02:32:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.024 02:32:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.024 02:32:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.024 02:32:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
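The check_ana_state calls that make up most of the multipath trace above reduce to a small poll loop over the block device's sysfs ana_state file. A minimal sketch of that helper, reconstructed from the xtrace lines (the variable names and the 20-iteration budget come from the log; the real target/multipath.sh may order the sleep and timeout check slightly differently and may exit rather than return on timeout):

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # poll until the sysfs node exists and reports the expected ANA state
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1   # assumption: give up after roughly 20 seconds
            sleep 1s
        done
    }

In the run above the state is flipped with scripts/rpc.py nvmf_subsystem_listener_set_ana_state on the 10.0.0.2 and 10.0.0.3 listeners and then polled through nvme0c0n1 and nvme0c1n1 until the kernel reflects the new value.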
00:16:14.024 02:32:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.024 02:32:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.024 02:32:54 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:14.024 02:32:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.024 02:32:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.024 02:32:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.024 02:32:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.024 02:32:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.024 02:32:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.024 02:32:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.024 02:32:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.024 02:32:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:14.024 02:32:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:14.024 02:32:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:14.024 02:32:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:14.024 02:32:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:14.024 02:32:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:14.024 02:32:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.024 02:32:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.024 02:32:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.024 02:32:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:14.024 02:32:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.024 02:32:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.024 02:32:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.024 02:32:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.024 02:32:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.024 02:32:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.024 02:32:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.024 02:32:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.024 02:32:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:14.024 02:32:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:14.024 Cannot find device "nvmf_tgt_br" 00:16:14.024 02:32:54 -- nvmf/common.sh@154 -- # true 00:16:14.024 02:32:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.024 Cannot find device "nvmf_tgt_br2" 00:16:14.024 02:32:54 -- nvmf/common.sh@155 -- # true 00:16:14.024 02:32:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:14.024 02:32:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:14.283 Cannot find device "nvmf_tgt_br" 00:16:14.283 02:32:54 -- nvmf/common.sh@157 -- # true 00:16:14.283 02:32:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:14.283 Cannot find device "nvmf_tgt_br2" 00:16:14.283 02:32:54 -- nvmf/common.sh@158 -- # true 00:16:14.283 02:32:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:14.283 02:32:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:14.283 02:32:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.283 02:32:54 -- nvmf/common.sh@161 -- # true 00:16:14.283 02:32:54 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.283 02:32:54 -- nvmf/common.sh@162 -- # true 00:16:14.283 02:32:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.283 02:32:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.283 02:32:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.283 02:32:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.283 02:32:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.283 02:32:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.283 02:32:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.283 02:32:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.283 02:32:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.283 02:32:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:14.283 02:32:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:14.283 02:32:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:14.283 02:32:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:14.283 02:32:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.283 02:32:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.283 02:32:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.283 02:32:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:14.283 02:32:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:14.283 02:32:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.283 02:32:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.283 02:32:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.283 02:32:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.283 02:32:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.542 02:32:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:14.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:16:14.542 00:16:14.542 --- 10.0.0.2 ping statistics --- 00:16:14.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.542 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:14.542 02:32:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:14.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:14.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:14.542 00:16:14.542 --- 10.0.0.3 ping statistics --- 00:16:14.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.542 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:14.542 02:32:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:14.542 00:16:14.542 --- 10.0.0.1 ping statistics --- 00:16:14.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.542 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:14.542 02:32:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.542 02:32:54 -- nvmf/common.sh@421 -- # return 0 00:16:14.542 02:32:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.542 02:32:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.542 02:32:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:14.542 02:32:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:14.542 02:32:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.542 02:32:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:14.542 02:32:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:14.542 02:32:54 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:14.542 02:32:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.542 02:32:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.542 02:32:54 -- common/autotest_common.sh@10 -- # set +x 00:16:14.542 02:32:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:14.542 02:32:54 -- nvmf/common.sh@469 -- # nvmfpid=75535 00:16:14.542 02:32:54 -- nvmf/common.sh@470 -- # waitforlisten 75535 00:16:14.542 02:32:54 -- common/autotest_common.sh@829 -- # '[' -z 75535 ']' 00:16:14.542 02:32:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.542 02:32:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.542 02:32:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.542 02:32:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.542 02:32:54 -- common/autotest_common.sh@10 -- # set +x 00:16:14.543 [2024-11-21 02:32:55.015340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:14.543 [2024-11-21 02:32:55.015404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.543 [2024-11-21 02:32:55.150724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.801 [2024-11-21 02:32:55.246591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.801 [2024-11-21 02:32:55.246800] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.801 [2024-11-21 02:32:55.246822] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.801 [2024-11-21 02:32:55.246834] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
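The nvmf_veth_init bring-up recorded just above is plain iproute2 plumbing: a network namespace for the target, veth pairs for the initiator side and the two target addresses, and a bridge tying them together. A standalone approximation of that topology, using the interface names and addresses from the log (run as root; this is a sketch of what nvmf/common.sh does, not the script itself):

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place, the 10.0.0.2 and 10.0.0.3 pings from the host and the 10.0.0.1 ping from inside the namespace are the connectivity check before nvmf_tgt is launched inside the namespace with "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2", as the log records.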
00:16:14.801 [2024-11-21 02:32:55.246884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.737 02:32:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.737 02:32:56 -- common/autotest_common.sh@862 -- # return 0 00:16:15.737 02:32:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:15.737 02:32:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.737 02:32:56 -- common/autotest_common.sh@10 -- # set +x 00:16:15.738 02:32:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.738 02:32:56 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:15.738 02:32:56 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:15.738 02:32:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.738 02:32:56 -- common/autotest_common.sh@10 -- # set +x 00:16:15.738 [2024-11-21 02:32:56.094268] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.738 02:32:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.738 02:32:56 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:15.738 02:32:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.738 02:32:56 -- common/autotest_common.sh@10 -- # set +x 00:16:15.738 02:32:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.738 02:32:56 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.738 02:32:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.738 02:32:56 -- common/autotest_common.sh@10 -- # set +x 00:16:15.738 [2024-11-21 02:32:56.110349] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.738 02:32:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.738 02:32:56 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.738 02:32:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.738 02:32:56 -- common/autotest_common.sh@10 -- # set +x 00:16:15.738 02:32:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.738 02:32:56 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:15.738 02:32:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.738 02:32:56 -- common/autotest_common.sh@10 -- # set +x 00:16:15.738 malloc0 00:16:15.738 02:32:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.738 02:32:56 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:15.738 02:32:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.738 02:32:56 -- common/autotest_common.sh@10 -- # set +x 00:16:15.738 02:32:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.738 02:32:56 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:15.738 02:32:56 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:15.738 02:32:56 -- nvmf/common.sh@520 -- # config=() 00:16:15.738 02:32:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:15.738 02:32:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:15.738 02:32:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:15.738 { 00:16:15.738 "params": { 00:16:15.738 "name": "Nvme$subsystem", 00:16:15.738 "trtype": "$TEST_TRANSPORT", 
00:16:15.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:15.738 "adrfam": "ipv4", 00:16:15.738 "trsvcid": "$NVMF_PORT", 00:16:15.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:15.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:15.738 "hdgst": ${hdgst:-false}, 00:16:15.738 "ddgst": ${ddgst:-false} 00:16:15.738 }, 00:16:15.738 "method": "bdev_nvme_attach_controller" 00:16:15.738 } 00:16:15.738 EOF 00:16:15.738 )") 00:16:15.738 02:32:56 -- nvmf/common.sh@542 -- # cat 00:16:15.738 02:32:56 -- nvmf/common.sh@544 -- # jq . 00:16:15.738 02:32:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:15.738 02:32:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:15.738 "params": { 00:16:15.738 "name": "Nvme1", 00:16:15.738 "trtype": "tcp", 00:16:15.738 "traddr": "10.0.0.2", 00:16:15.738 "adrfam": "ipv4", 00:16:15.738 "trsvcid": "4420", 00:16:15.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.738 "hdgst": false, 00:16:15.738 "ddgst": false 00:16:15.738 }, 00:16:15.738 "method": "bdev_nvme_attach_controller" 00:16:15.738 }' 00:16:15.738 [2024-11-21 02:32:56.192523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:15.738 [2024-11-21 02:32:56.192587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75586 ] 00:16:15.738 [2024-11-21 02:32:56.326498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.997 [2024-11-21 02:32:56.435482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.997 Running I/O for 10 seconds... 00:16:28.209 00:16:28.209 Latency(us) 00:16:28.209 [2024-11-21T02:33:08.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.209 [2024-11-21T02:33:08.856Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:28.209 Verification LBA range: start 0x0 length 0x1000 00:16:28.209 Nvme1n1 : 10.01 10946.30 85.52 0.00 0.00 11666.01 860.16 20494.89 00:16:28.209 [2024-11-21T02:33:08.856Z] =================================================================================================================== 00:16:28.209 [2024-11-21T02:33:08.856Z] Total : 10946.30 85.52 0.00 0.00 11666.01 860.16 20494.89 00:16:28.209 02:33:06 -- target/zcopy.sh@39 -- # perfpid=75704 00:16:28.209 02:33:06 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:28.209 02:33:06 -- common/autotest_common.sh@10 -- # set +x 00:16:28.209 02:33:06 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:28.209 02:33:06 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:28.209 02:33:06 -- nvmf/common.sh@520 -- # config=() 00:16:28.209 02:33:06 -- nvmf/common.sh@520 -- # local subsystem config 00:16:28.209 02:33:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:28.209 02:33:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:28.209 { 00:16:28.209 "params": { 00:16:28.209 "name": "Nvme$subsystem", 00:16:28.209 "trtype": "$TEST_TRANSPORT", 00:16:28.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.209 "adrfam": "ipv4", 00:16:28.209 "trsvcid": "$NVMF_PORT", 00:16:28.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.209 "hdgst": ${hdgst:-false}, 00:16:28.209 "ddgst": ${ddgst:-false} 
00:16:28.209 }, 00:16:28.209 "method": "bdev_nvme_attach_controller" 00:16:28.209 } 00:16:28.209 EOF 00:16:28.209 )") 00:16:28.209 02:33:06 -- nvmf/common.sh@542 -- # cat 00:16:28.209 [2024-11-21 02:33:06.962164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:06.963394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 02:33:06 -- nvmf/common.sh@544 -- # jq . 00:16:28.209 2024/11/21 02:33:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 02:33:06 -- nvmf/common.sh@545 -- # IFS=, 00:16:28.209 02:33:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:28.209 "params": { 00:16:28.209 "name": "Nvme1", 00:16:28.209 "trtype": "tcp", 00:16:28.209 "traddr": "10.0.0.2", 00:16:28.209 "adrfam": "ipv4", 00:16:28.209 "trsvcid": "4420", 00:16:28.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.209 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.209 "hdgst": false, 00:16:28.209 "ddgst": false 00:16:28.209 }, 00:16:28.209 "method": "bdev_nvme_attach_controller" 00:16:28.209 }' 00:16:28.209 [2024-11-21 02:33:06.970031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:06.970248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:06.981991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:06.982030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:06.989986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:06.990020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:06.997987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:06.998022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.010004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.010043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.017435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:28.209 [2024-11-21 02:33:07.017521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75704 ] 00:16:28.209 [2024-11-21 02:33:07.018006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.018038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.030018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.030054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.038017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.038049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.046011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.046044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.058022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.058055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.070026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.070058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.082008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.082036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.094348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.094392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.106324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.106364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.118360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.118402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.209 [2024-11-21 02:33:07.130348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.209 [2024-11-21 02:33:07.130389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.209 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.210 [2024-11-21 02:33:07.142363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.210 [2024-11-21 02:33:07.142387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.210 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.210 [2024-11-21 02:33:07.153043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.210 [2024-11-21 02:33:07.154350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.210 [2024-11-21 02:33:07.154388] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:28.210 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:28.210 [2024-11-21 02:33:07.166369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:28.210 [2024-11-21 02:33:07.166408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:28.210 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line sequence (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", nvmf_rpc.c:1513:nvmf_rpc_ns_paused: "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats for each retried nvmf_subsystem_add_ns call from 02:33:07.178 through 02:33:07.222 ...]
00:16:28.210 [2024-11-21 02:33:07.225485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... the sequence continues repeating for each retried call from 02:33:07.234 through 02:33:07.414 ...]
00:16:28.210 Running I/O for 5 seconds...
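For reference, each failing call above is the nvmf_subsystem_add_ns JSON-RPC method invoked with the parameters shown in the log (nqn nqn.2016-06.io.spdk:cnode1, bdev malloc0, nsid 1). A minimal sketch of how such a call is typically issued with SPDK's bundled rpc.py client follows; the client invocation itself is an assumption (this run drives the RPC through the test scripts), only the method name, parameters, and error text are taken from the log.

  # first call claims NSID 1 on the subsystem (assumed to have succeeded earlier in the test)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # repeating the call with the same NSID is rejected by the target:
  #   subsystem.c: Requested NSID 1 already in use
  #   JSON-RPC error Code=-32602 Msg=Invalid parameters
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1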
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.480459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.480506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.495168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.495215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.511909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.511956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.527721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.527763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.545251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.545283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.556174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.556205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.570970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.571017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.586740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.586819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.604395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.604426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.614519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.614549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.624290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.624322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.637540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.637572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.653590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.653638] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.669316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.669347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.680437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.680468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.688791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.688837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.704436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.704467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.715843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.715873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.723743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.723800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.738935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.738983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.754765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.754796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.771421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.771452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.783064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.783112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.799070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.799102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.211 [2024-11-21 02:33:07.815308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.211 [2024-11-21 02:33:07.815339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.211 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.832617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.832648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.848483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.848515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.864966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.865014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.881810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.881840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.898291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.898323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.914816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.914848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.931837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.931869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.947371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.947403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.958605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.958637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.974115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.974164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:07.990392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:07.990425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.007789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.007835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.024220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.024252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.035499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.035531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.050986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.051020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.067793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.067826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.083912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.083961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 
02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.094982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.095014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.110262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.110294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.127065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.127113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.143432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.143465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.160279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.160312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.177336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.177369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.193475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.193524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.210626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.210658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.226627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.226659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.237551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.212 [2024-11-21 02:33:08.237584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.212 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.212 [2024-11-21 02:33:08.253212] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.253244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.264295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.264328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.279164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.279197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.290033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.290083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.299029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.299078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.312453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.312485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.327648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.327680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.340226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.340258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.351195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.351228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.367181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.367213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.378549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.378581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.393517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.393566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.404898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.404932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.420799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.420831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.437914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.437964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.447235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.447268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.456884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.456933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.466237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.466288] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.475797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.475845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.213 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.213 [2024-11-21 02:33:08.489165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.213 [2024-11-21 02:33:08.489197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.497349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.497381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.512722] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.512798] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.523958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.524012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.538608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.538658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.548337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 
02:33:08.548372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.558430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.558480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.568585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.568633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.582833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.582898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.599144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.599196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.615875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.615910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.630979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.631028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.642514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:28.214 [2024-11-21 02:33:08.642562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.658253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.658319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.674636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.674669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.692159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.692192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.708494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.708526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.724857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.724889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.741528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.741560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.758693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:28.214 [2024-11-21 02:33:08.758726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.774965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.774997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.791261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.791294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.807575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.807608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.823982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.824015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.214 [2024-11-21 02:33:08.840874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.214 [2024-11-21 02:33:08.840909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.214 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.855478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.855511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.871166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.871199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.887770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.887818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.904511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.904544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.920278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.920311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.937142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.937174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.952878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.952926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.970367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.970400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:08.985665] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:08.985698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:09.001163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:09.001196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:09.018370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:09.018403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:09.034894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:09.034926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:09.052164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:09.052198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:09.068428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.474 [2024-11-21 02:33:09.068461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.474 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.474 [2024-11-21 02:33:09.085013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.475 [2024-11-21 02:33:09.085046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.475 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.475 [2024-11-21 
02:33:09.102289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.475 [2024-11-21 02:33:09.102456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.475 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.118820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.118886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.134561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.134593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.151351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.151384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.168042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.168075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.184541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.184574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.201315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.201494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
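Every failure recorded above is the same JSON-RPC call rejected for the same reason: nvmf_subsystem_add_ns is retried against nqn.2016-06.io.spdk:cnode1 while NSID 1 is already claimed, so the target answers Code=-32602 (Invalid parameters) each time. For reference, a single equivalent request can be issued against the target's RPC socket roughly as below. This is a minimal sketch, not part of the test run: the socket path /var/tmp/spdk.sock is an assumption (SPDK's usual default), while the method name and parameter layout are copied from the params map printed in the log. In practice this call is normally driven through SPDK's scripts/rpc.py wrapper rather than raw socket I/O.

    #!/usr/bin/env python3
    # Sketch only: reproduce one of the failing nvmf_subsystem_add_ns calls
    # seen in the log over SPDK's JSON-RPC Unix socket.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            # Values taken from the log records above.
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        # Assumed default socket path; the harness may use another location.
        sock.connect("/var/tmp/spdk.sock")
        sock.sendall(json.dumps(request).encode())

        # Read until a complete JSON document has arrived.
        buf = b""
        response = None
        while response is None:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                response = json.loads(buf.decode())
            except ValueError:
                continue

    # With NSID 1 already in use, the reply carries an error object of the form
    # {"code": -32602, "message": "Invalid parameters"}, matching the log above.
    print(response)
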
00:16:28.750 [2024-11-21 02:33:09.217676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.217710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.234579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.234613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.251647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.251680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.267824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.267857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.284496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.284529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.300511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.300544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.316836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.316868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:28.750 [2024-11-21 02:33:09.332986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.750 [2024-11-21 02:33:09.333018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.750 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.751 [2024-11-21 02:33:09.349126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.751 [2024-11-21 02:33:09.349159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.751 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.751 [2024-11-21 02:33:09.366945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.751 [2024-11-21 02:33:09.366977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.751 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:28.751 [2024-11-21 02:33:09.380887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.751 [2024-11-21 02:33:09.380920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.751 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.396984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.397018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.412348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.412381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.428145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.428180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.444518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.444551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.460570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.460604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.477557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.477591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.493804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.493837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.510393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.510427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.527718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.527792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.543851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.543885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.560922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.560955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.576368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.576531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.587217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.587250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.602574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.602734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.620274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.620308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.635045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.635099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.014 [2024-11-21 02:33:09.646431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.014 [2024-11-21 02:33:09.646590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.014 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.273 [2024-11-21 02:33:09.662642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.273 [2024-11-21 02:33:09.662676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.678029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.678070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.694489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.694523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.711800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.711834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.726936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.726970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.743457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.743490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.759754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.759827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.776400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.776434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.791224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.791259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.806512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.806545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.821060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.821234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.837668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.837701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.853916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.853952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.870487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.870521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.887500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.887535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.274 [2024-11-21 02:33:09.903608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.274 [2024-11-21 02:33:09.903642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.274 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:09.920751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:09.920967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:09.935458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:09.935493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:09.951827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:09.951861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:09.967982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:09.968015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:09.985593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:09.985628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:09 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.000980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.001014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.017678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.017712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.034688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.034723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.052291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.052325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.067141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.067301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.082217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.082269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.093414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.093447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.109157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.109206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.125710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.125769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.142866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.142915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.158547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.158596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.535 [2024-11-21 02:33:10.169690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.535 [2024-11-21 02:33:10.169723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.535 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.795 [2024-11-21 02:33:10.184634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.795 [2024-11-21 02:33:10.184667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.795 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.795 [2024-11-21 02:33:10.195768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.795 [2024-11-21 02:33:10.195800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.795 2024/11/21 
02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.795 [2024-11-21 02:33:10.211852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.795 [2024-11-21 02:33:10.211884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.795 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.795 [2024-11-21 02:33:10.227908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.795 [2024-11-21 02:33:10.227941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.795 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.795 [2024-11-21 02:33:10.245270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.795 [2024-11-21 02:33:10.245303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.795 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.260709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.260768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.271824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.271855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.287081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.287116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.303989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.304022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.321300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.321333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.338881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.338930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.353463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.353511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.369939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.369990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.386573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.386606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.401543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.401592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.417862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.417952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.796 [2024-11-21 02:33:10.433895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.796 [2024-11-21 02:33:10.433946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.796 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.055 [2024-11-21 02:33:10.450186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.055 [2024-11-21 02:33:10.450267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.055 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.055 [2024-11-21 02:33:10.462409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.055 [2024-11-21 02:33:10.462441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.055 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.478316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.478349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.494081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.494134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.511531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.511564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.528558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.528590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.544991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.545041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.561637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.561686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.578376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.578409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.594834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.594882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.612059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.612092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.628588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.628637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.645200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.645260] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.660825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.660862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.676710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.676784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.056 [2024-11-21 02:33:10.692402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.056 [2024-11-21 02:33:10.692435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.056 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.702770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.702827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.716998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.717031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.732281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.732315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.749270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 
02:33:10.749303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.765937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.765987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.783792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.783824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.798138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.798205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.814435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.814500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.830584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.830634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.847217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.847266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.863987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:30.316 [2024-11-21 02:33:10.864037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.879316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.879366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.890518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.890566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.906775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.906821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.923251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.923301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.940221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.940270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.316 [2024-11-21 02:33:10.955297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.316 [2024-11-21 02:33:10.955363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.316 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.575 [2024-11-21 02:33:10.970901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:30.575 [2024-11-21 02:33:10.970937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.575 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.575 [2024-11-21 02:33:10.981730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.575 [2024-11-21 02:33:10.981774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.575 2024/11/21 02:33:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.575 [2024-11-21 02:33:10.997686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.575 [2024-11-21 02:33:10.997735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.575 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.575 [2024-11-21 02:33:11.014337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.575 [2024-11-21 02:33:11.014370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.575 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.575 [2024-11-21 02:33:11.031329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.575 [2024-11-21 02:33:11.031378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.575 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.047206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.047240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.064187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.064221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.080351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.080384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.097190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.097224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.113410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.113460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.130470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.130503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.147518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.147552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.162987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.163020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.173803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.173851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.189244] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.189278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.576 [2024-11-21 02:33:11.205475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.576 [2024-11-21 02:33:11.205525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.576 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.224270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.224304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.238805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.238849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.255166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.255199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.271824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.271857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.288765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.288797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 
02:33:11.305085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.305118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.320635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.320668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.331698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.331732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.347564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.347598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.364033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.364069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.381059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.381093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.397561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.397594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
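The identical failures above (and continuing below) are the expected output of the zcopy test's invalid-parameter loop: it repeatedly issues nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is still attached, so the target logs "Requested NSID 1 already in use" in subsystem.c and the RPC comes back to the caller as Code=-32602 Msg=Invalid parameters (the Go-style "2024/11/21 ..." lines). The same rejection can be reproduced by hand with scripts/rpc.py; the sketch below is illustrative only, with the command shape copied from the rpc_cmd trace near the end of this run and the repository path taken from this job's workspace.

  # Sketch only: assumes a running target where bdev malloc0 and subsystem cnode1
  # already exist and no namespace has been attached yet.
  SPDK=/home/vagrant/spdk_repo/spdk    # workspace path used by this job
  # The first attach claims NSID 1 and succeeds.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Re-attaching any bdev with the same NSID is rejected: the target logs
  # "Requested NSID 1 already in use" and the RPC fails with Code=-32602
  # Msg=Invalid parameters, exactly the pattern recorded throughout this loop.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1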
00:16:30.836 [2024-11-21 02:33:11.413862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.413952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.430285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.430318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.447939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.447972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.463095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.463129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.836 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.836 [2024-11-21 02:33:11.478649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.836 [2024-11-21 02:33:11.478686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.095 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.095 [2024-11-21 02:33:11.494211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.095 [2024-11-21 02:33:11.494245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.095 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.095 [2024-11-21 02:33:11.508893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.095 [2024-11-21 02:33:11.508926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.095 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:31.095 [2024-11-21 02:33:11.523898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.095 [2024-11-21 02:33:11.523931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.540579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.540614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.557311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.557345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.574506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.574539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.590672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.590705] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.607557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.607723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.624939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.624972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.641265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.641300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.658776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.658839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.674614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.674649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.690572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.690607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.707192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.707352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.096 [2024-11-21 02:33:11.724211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.096 [2024-11-21 02:33:11.724245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.096 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.740133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.740168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.756759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.756792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.772670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.772704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.783975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.784008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.799984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.800018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.817124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.817158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.834429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.834590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.848298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.848331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.864323] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.864356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.880131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.880165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.897140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.897174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.912902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.912935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.930177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.930210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.946606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.946639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.962787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.962819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.981002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.981038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.356 [2024-11-21 02:33:11.996005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.356 [2024-11-21 02:33:11.996043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.356 2024/11/21 02:33:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.006369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.006402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.020349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.020382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.034684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.034718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.049674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.049708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.065842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.065913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.082655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.082689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.099132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.099310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.116365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.116401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.132050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.132083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.143061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.143094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.159706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.159754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.174320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.174509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.189466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.189624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.201242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.201276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.217106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.616 [2024-11-21 02:33:12.217139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.616 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.616 [2024-11-21 02:33:12.233441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.617 [2024-11-21 02:33:12.233474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.617 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.617 [2024-11-21 02:33:12.249480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.617 [2024-11-21 02:33:12.249513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.617 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.266961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.266995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.282967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.283000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.299429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.299463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.316405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.316439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.333041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.333075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.349291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.349325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.360657] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.360690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.375848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.375881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.392356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.392391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 
02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.409423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.409455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876 [2024-11-21 02:33:12.424867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.424901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.876
00:16:31.876 Latency(us)
00:16:31.876 [2024-11-21T02:33:12.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:31.876 [2024-11-21T02:33:12.523Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:31.876 Nvme1n1 : 5.01 13344.44 104.25 0.00 0.00 9582.26 3902.37 21924.77
00:16:31.876 [2024-11-21T02:33:12.523Z] ===================================================================================================================
00:16:31.876 [2024-11-21T02:33:12.523Z] Total : 13344.44 104.25 0.00 0.00 9582.26 3902.37 21924.77
00:16:31.876 [2024-11-21 02:33:12.435433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.876 [2024-11-21 02:33:12.435466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.876 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.877 [2024-11-21 02:33:12.447384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.877 [2024-11-21 02:33:12.447416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.877 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.877 [2024-11-21 02:33:12.459386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.877 [2024-11-21 02:33:12.459416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.877 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.877 [2024-11-21 02:33:12.471391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.877
[2024-11-21 02:33:12.471420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.877 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.877 [2024-11-21 02:33:12.483392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.877 [2024-11-21 02:33:12.483421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.877 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.877 [2024-11-21 02:33:12.495391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.877 [2024-11-21 02:33:12.495420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.877 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.877 [2024-11-21 02:33:12.507412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.877 [2024-11-21 02:33:12.507442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.877 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:31.877 [2024-11-21 02:33:12.519398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.877 [2024-11-21 02:33:12.519427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.136 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.531403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.531431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.543403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.543430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.555406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:16:32.137 [2024-11-21 02:33:12.555434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.567406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.567434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.579427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.579455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.591433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.591462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.603420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.603449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.615423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.615451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.627444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.627473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.639448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.639478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.651431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.651459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.663436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.663465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.675469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.675502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.687494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.687703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.699453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.699485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.711448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.711477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.723467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.723497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 [2024-11-21 02:33:12.735471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:32.137 [2024-11-21 02:33:12.735501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:32.137 2024/11/21 02:33:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:32.137 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75704) - No such process 00:16:32.137 02:33:12 -- target/zcopy.sh@49 -- # wait 75704 00:16:32.137 02:33:12 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.137 02:33:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.137 02:33:12 -- common/autotest_common.sh@10 -- # set +x 00:16:32.137 02:33:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.137 02:33:12 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:32.137 02:33:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.137 02:33:12 -- common/autotest_common.sh@10 -- # set +x 00:16:32.137 delay0 00:16:32.137 02:33:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.137 02:33:12 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:32.137 02:33:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.137 02:33:12 -- common/autotest_common.sh@10 -- # set +x 00:16:32.137 02:33:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.137 02:33:12 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:32.396 [2024-11-21 02:33:12.928907] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:40.548 Initializing NVMe Controllers 00:16:40.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:40.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:40.548 Initialization complete. Launching workers. 
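For reference, the delay-bdev plus abort step traced just above boils down to three commands; a minimal sketch, reusing the paths and the 10.0.0.2:4420 listener from this run (the old namespace 1 was removed immediately before):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # wrap malloc0 in an artificial-latency bdev named delay0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1                          # expose delay0 as NSID 1 of cnode1
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'   # 5 s of random mixed I/O at queue depth 64, issuing aborts

The I/O completed / abort submitted counters that follow are the output of that abort run.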
00:16:40.548 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 262, failed: 21906 00:16:40.548 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22074, failed to submit 94 00:16:40.548 success 21954, unsuccess 120, failed 0 00:16:40.548 02:33:19 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:40.548 02:33:19 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:40.548 02:33:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:40.548 02:33:19 -- nvmf/common.sh@116 -- # sync 00:16:40.548 02:33:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:40.548 02:33:20 -- nvmf/common.sh@119 -- # set +e 00:16:40.548 02:33:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:40.548 02:33:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:40.548 rmmod nvme_tcp 00:16:40.548 rmmod nvme_fabrics 00:16:40.548 rmmod nvme_keyring 00:16:40.548 02:33:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:40.548 02:33:20 -- nvmf/common.sh@123 -- # set -e 00:16:40.548 02:33:20 -- nvmf/common.sh@124 -- # return 0 00:16:40.548 02:33:20 -- nvmf/common.sh@477 -- # '[' -n 75535 ']' 00:16:40.548 02:33:20 -- nvmf/common.sh@478 -- # killprocess 75535 00:16:40.548 02:33:20 -- common/autotest_common.sh@936 -- # '[' -z 75535 ']' 00:16:40.548 02:33:20 -- common/autotest_common.sh@940 -- # kill -0 75535 00:16:40.548 02:33:20 -- common/autotest_common.sh@941 -- # uname 00:16:40.548 02:33:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.548 02:33:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75535 00:16:40.548 killing process with pid 75535 00:16:40.548 02:33:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:40.548 02:33:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:40.548 02:33:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75535' 00:16:40.548 02:33:20 -- common/autotest_common.sh@955 -- # kill 75535 00:16:40.548 02:33:20 -- common/autotest_common.sh@960 -- # wait 75535 00:16:40.548 02:33:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:40.548 02:33:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:40.548 02:33:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:40.548 02:33:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.548 02:33:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:40.548 02:33:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.548 02:33:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.548 02:33:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.548 02:33:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:40.548 00:16:40.548 real 0m25.976s 00:16:40.548 user 0m39.843s 00:16:40.548 sys 0m8.376s 00:16:40.548 02:33:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:40.548 02:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.548 ************************************ 00:16:40.548 END TEST nvmf_zcopy 00:16:40.548 ************************************ 00:16:40.548 02:33:20 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:40.548 02:33:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:40.548 02:33:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.548 02:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.548 ************************************ 00:16:40.548 START TEST 
nvmf_nmic 00:16:40.548 ************************************ 00:16:40.548 02:33:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:40.548 * Looking for test storage... 00:16:40.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:40.548 02:33:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:40.548 02:33:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:40.548 02:33:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:40.548 02:33:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:40.548 02:33:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:40.548 02:33:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:40.548 02:33:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:40.548 02:33:20 -- scripts/common.sh@335 -- # IFS=.-: 00:16:40.548 02:33:20 -- scripts/common.sh@335 -- # read -ra ver1 00:16:40.548 02:33:20 -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.548 02:33:20 -- scripts/common.sh@336 -- # read -ra ver2 00:16:40.548 02:33:20 -- scripts/common.sh@337 -- # local 'op=<' 00:16:40.548 02:33:20 -- scripts/common.sh@339 -- # ver1_l=2 00:16:40.548 02:33:20 -- scripts/common.sh@340 -- # ver2_l=1 00:16:40.548 02:33:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:40.548 02:33:20 -- scripts/common.sh@343 -- # case "$op" in 00:16:40.548 02:33:20 -- scripts/common.sh@344 -- # : 1 00:16:40.548 02:33:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:40.548 02:33:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:40.548 02:33:20 -- scripts/common.sh@364 -- # decimal 1 00:16:40.548 02:33:20 -- scripts/common.sh@352 -- # local d=1 00:16:40.548 02:33:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.548 02:33:20 -- scripts/common.sh@354 -- # echo 1 00:16:40.548 02:33:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:40.548 02:33:20 -- scripts/common.sh@365 -- # decimal 2 00:16:40.548 02:33:20 -- scripts/common.sh@352 -- # local d=2 00:16:40.548 02:33:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.548 02:33:20 -- scripts/common.sh@354 -- # echo 2 00:16:40.548 02:33:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:40.548 02:33:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:40.548 02:33:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:40.548 02:33:20 -- scripts/common.sh@367 -- # return 0 00:16:40.548 02:33:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.548 02:33:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:40.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.548 --rc genhtml_branch_coverage=1 00:16:40.549 --rc genhtml_function_coverage=1 00:16:40.549 --rc genhtml_legend=1 00:16:40.549 --rc geninfo_all_blocks=1 00:16:40.549 --rc geninfo_unexecuted_blocks=1 00:16:40.549 00:16:40.549 ' 00:16:40.549 02:33:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:40.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.549 --rc genhtml_branch_coverage=1 00:16:40.549 --rc genhtml_function_coverage=1 00:16:40.549 --rc genhtml_legend=1 00:16:40.549 --rc geninfo_all_blocks=1 00:16:40.549 --rc geninfo_unexecuted_blocks=1 00:16:40.549 00:16:40.549 ' 00:16:40.549 02:33:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:40.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.549 --rc 
genhtml_branch_coverage=1 00:16:40.549 --rc genhtml_function_coverage=1 00:16:40.549 --rc genhtml_legend=1 00:16:40.549 --rc geninfo_all_blocks=1 00:16:40.549 --rc geninfo_unexecuted_blocks=1 00:16:40.549 00:16:40.549 ' 00:16:40.549 02:33:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:40.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.549 --rc genhtml_branch_coverage=1 00:16:40.549 --rc genhtml_function_coverage=1 00:16:40.549 --rc genhtml_legend=1 00:16:40.549 --rc geninfo_all_blocks=1 00:16:40.549 --rc geninfo_unexecuted_blocks=1 00:16:40.549 00:16:40.549 ' 00:16:40.549 02:33:20 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:40.549 02:33:20 -- nvmf/common.sh@7 -- # uname -s 00:16:40.549 02:33:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.549 02:33:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.549 02:33:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.549 02:33:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.549 02:33:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.549 02:33:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.549 02:33:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.549 02:33:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.549 02:33:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.549 02:33:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.549 02:33:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:16:40.549 02:33:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:16:40.549 02:33:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.549 02:33:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.549 02:33:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:40.549 02:33:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:40.549 02:33:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.549 02:33:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.549 02:33:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.549 02:33:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.549 02:33:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.549 02:33:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.549 02:33:20 -- paths/export.sh@5 -- # export PATH 00:16:40.549 02:33:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.549 02:33:20 -- nvmf/common.sh@46 -- # : 0 00:16:40.549 02:33:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:40.549 02:33:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:40.549 02:33:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:40.549 02:33:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.549 02:33:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.549 02:33:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:40.549 02:33:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:40.549 02:33:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:40.549 02:33:20 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.549 02:33:20 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.549 02:33:20 -- target/nmic.sh@14 -- # nvmftestinit 00:16:40.549 02:33:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:40.549 02:33:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.549 02:33:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:40.549 02:33:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:40.549 02:33:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:40.549 02:33:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.549 02:33:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.549 02:33:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.549 02:33:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:40.549 02:33:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:40.549 02:33:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:40.549 02:33:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:40.549 02:33:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:40.549 02:33:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:40.549 02:33:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.549 02:33:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.549 02:33:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:40.549 02:33:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:40.549 02:33:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.549 02:33:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.549 02:33:20 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.549 02:33:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.549 02:33:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.549 02:33:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.549 02:33:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.549 02:33:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.549 02:33:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:40.549 02:33:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:40.549 Cannot find device "nvmf_tgt_br" 00:16:40.549 02:33:20 -- nvmf/common.sh@154 -- # true 00:16:40.549 02:33:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.549 Cannot find device "nvmf_tgt_br2" 00:16:40.549 02:33:20 -- nvmf/common.sh@155 -- # true 00:16:40.549 02:33:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:40.549 02:33:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:40.549 Cannot find device "nvmf_tgt_br" 00:16:40.549 02:33:20 -- nvmf/common.sh@157 -- # true 00:16:40.549 02:33:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:40.549 Cannot find device "nvmf_tgt_br2" 00:16:40.549 02:33:20 -- nvmf/common.sh@158 -- # true 00:16:40.549 02:33:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:40.549 02:33:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:40.549 02:33:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.549 02:33:20 -- nvmf/common.sh@161 -- # true 00:16:40.549 02:33:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.549 02:33:20 -- nvmf/common.sh@162 -- # true 00:16:40.549 02:33:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.549 02:33:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.549 02:33:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.549 02:33:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.549 02:33:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.549 02:33:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.549 02:33:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.549 02:33:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:40.549 02:33:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:40.549 02:33:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:40.549 02:33:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:40.549 02:33:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:40.549 02:33:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:40.549 02:33:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.549 02:33:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.549 02:33:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:40.549 02:33:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:40.549 02:33:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:40.549 02:33:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.549 02:33:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.549 02:33:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.549 02:33:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.549 02:33:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.549 02:33:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:40.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:40.550 00:16:40.550 --- 10.0.0.2 ping statistics --- 00:16:40.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.550 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:40.550 02:33:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:40.550 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:40.550 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:40.550 00:16:40.550 --- 10.0.0.3 ping statistics --- 00:16:40.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.550 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:40.550 02:33:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:40.550 00:16:40.550 --- 10.0.0.1 ping statistics --- 00:16:40.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.550 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:40.550 02:33:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.550 02:33:20 -- nvmf/common.sh@421 -- # return 0 00:16:40.550 02:33:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:40.550 02:33:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.550 02:33:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:40.550 02:33:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:40.550 02:33:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.550 02:33:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:40.550 02:33:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:40.550 02:33:20 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:40.550 02:33:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:40.550 02:33:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.550 02:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.550 02:33:20 -- nvmf/common.sh@469 -- # nvmfpid=76049 00:16:40.550 02:33:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:40.550 02:33:20 -- nvmf/common.sh@470 -- # waitforlisten 76049 00:16:40.550 02:33:20 -- common/autotest_common.sh@829 -- # '[' -z 76049 ']' 00:16:40.550 02:33:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.550 02:33:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
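The topology that nvmf_veth_init assembles above is a veth pair for each endpoint joined by a Linux bridge, with the target side moved into its own network namespace. Condensed, the equivalent commands (same interface names and addresses as this run) are roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side gets moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                               # same reachability check as in the log above

The second target interface (nvmf_tgt_if2, 10.0.0.3) follows the same pattern, and the pings to 10.0.0.2, 10.0.0.3 and 10.0.0.1 above confirm the bridge is passing traffic before nvmf_tgt is started.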
00:16:40.550 02:33:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.550 02:33:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.550 02:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:40.550 [2024-11-21 02:33:21.049954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:40.550 [2024-11-21 02:33:21.050042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.550 [2024-11-21 02:33:21.188276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.808 [2024-11-21 02:33:21.279619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:40.808 [2024-11-21 02:33:21.279777] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.808 [2024-11-21 02:33:21.279792] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.808 [2024-11-21 02:33:21.279800] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.808 [2024-11-21 02:33:21.280801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.808 [2024-11-21 02:33:21.280943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.808 [2024-11-21 02:33:21.281032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.808 [2024-11-21 02:33:21.281342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.741 02:33:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.741 02:33:22 -- common/autotest_common.sh@862 -- # return 0 00:16:41.741 02:33:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:41.741 02:33:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.741 02:33:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.741 02:33:22 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.741 [2024-11-21 02:33:22.108440] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.741 02:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.741 02:33:22 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.741 Malloc0 00:16:41.741 02:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.741 02:33:22 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.741 02:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.741 02:33:22 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 
-- common/autotest_common.sh@10 -- # set +x 00:16:41.741 02:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.741 02:33:22 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.741 [2024-11-21 02:33:22.195204] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.741 test case1: single bdev can't be used in multiple subsystems 00:16:41.741 02:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.741 02:33:22 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:41.741 02:33:22 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.741 02:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.741 02:33:22 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.741 02:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.741 02:33:22 -- target/nmic.sh@28 -- # nmic_status=0 00:16:41.741 02:33:22 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.741 [2024-11-21 02:33:22.218993] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:41.741 [2024-11-21 02:33:22.219036] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:41.741 [2024-11-21 02:33:22.219057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.741 2024/11/21 02:33:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:41.741 request: 00:16:41.741 { 00:16:41.741 "method": "nvmf_subsystem_add_ns", 00:16:41.741 "params": { 00:16:41.741 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:41.741 "namespace": { 00:16:41.741 "bdev_name": "Malloc0" 00:16:41.741 } 00:16:41.741 } 00:16:41.741 } 00:16:41.741 Got JSON-RPC error response 00:16:41.741 GoRPCClient: error on JSON-RPC call 00:16:41.741 02:33:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:41.741 02:33:22 -- target/nmic.sh@29 -- # nmic_status=1 00:16:41.741 02:33:22 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:41.741 02:33:22 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:41.741 Adding namespace failed - expected result. 
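That failure is the point of test case1: the first subsystem to add a bdev as a namespace claims it exclusively, so a second nvmf_subsystem_add_ns against the same bdev must be rejected. A minimal sketch of the same sequence with rpc.py, reusing the names from this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0 (exclusive_write)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: bdev already claimed

The log shows exactly this outcome: the second add fails with "bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target", and the test treats that as the expected result before moving on to the multipath case.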
00:16:41.741 02:33:22 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:41.741 test case2: host connect to nvmf target in multiple paths 00:16:41.741 02:33:22 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:41.741 02:33:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.741 02:33:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.742 [2024-11-21 02:33:22.231100] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:41.742 02:33:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.742 02:33:22 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.999 02:33:22 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:41.999 02:33:22 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.999 02:33:22 -- common/autotest_common.sh@1187 -- # local i=0 00:16:41.999 02:33:22 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.999 02:33:22 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:41.999 02:33:22 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:44.526 02:33:24 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:44.526 02:33:24 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:44.526 02:33:24 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:44.526 02:33:24 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:44.526 02:33:24 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.526 02:33:24 -- common/autotest_common.sh@1197 -- # return 0 00:16:44.526 02:33:24 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:44.526 [global] 00:16:44.526 thread=1 00:16:44.526 invalidate=1 00:16:44.526 rw=write 00:16:44.526 time_based=1 00:16:44.526 runtime=1 00:16:44.526 ioengine=libaio 00:16:44.526 direct=1 00:16:44.526 bs=4096 00:16:44.526 iodepth=1 00:16:44.526 norandommap=0 00:16:44.526 numjobs=1 00:16:44.526 00:16:44.526 verify_dump=1 00:16:44.526 verify_backlog=512 00:16:44.526 verify_state_save=0 00:16:44.526 do_verify=1 00:16:44.526 verify=crc32c-intel 00:16:44.526 [job0] 00:16:44.526 filename=/dev/nvme0n1 00:16:44.526 Could not set queue depth (nvme0n1) 00:16:44.526 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.526 fio-3.35 00:16:44.526 Starting 1 thread 00:16:45.460 00:16:45.460 job0: (groupid=0, jobs=1): err= 0: pid=76157: Thu Nov 21 02:33:25 2024 00:16:45.460 read: IOPS=3248, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec) 00:16:45.460 slat (usec): min=11, max=103, avg=14.04, stdev= 5.37 00:16:45.460 clat (usec): min=81, max=544, avg=146.95, stdev=20.50 00:16:45.460 lat (usec): min=123, max=561, avg=160.99, stdev=21.81 00:16:45.460 clat percentiles (usec): 00:16:45.460 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 133], 00:16:45.460 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:16:45.460 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 
182], 00:16:45.460 | 99.00th=[ 210], 99.50th=[ 223], 99.90th=[ 273], 99.95th=[ 310], 00:16:45.460 | 99.99th=[ 545] 00:16:45.460 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:45.460 slat (usec): min=16, max=122, avg=21.00, stdev= 6.99 00:16:45.460 clat (usec): min=42, max=8013, avg=108.94, stdev=136.29 00:16:45.460 lat (usec): min=96, max=8030, avg=129.95, stdev=136.63 00:16:45.460 clat percentiles (usec): 00:16:45.460 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 94], 00:16:45.460 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 105], 00:16:45.460 | 70.00th=[ 111], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 135], 00:16:45.460 | 99.00th=[ 157], 99.50th=[ 184], 99.90th=[ 717], 99.95th=[ 1156], 00:16:45.460 | 99.99th=[ 8029] 00:16:45.460 bw ( KiB/s): min=15080, max=15080, per=100.00%, avg=15080.00, stdev= 0.00, samples=1 00:16:45.460 iops : min= 3770, max= 3770, avg=3770.00, stdev= 0.00, samples=1 00:16:45.460 lat (usec) : 50=0.01%, 100=23.84%, 250=75.88%, 500=0.13%, 750=0.09% 00:16:45.460 lat (usec) : 1000=0.01% 00:16:45.460 lat (msec) : 2=0.01%, 10=0.01% 00:16:45.460 cpu : usr=2.30%, sys=9.00%, ctx=6845, majf=0, minf=5 00:16:45.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.460 issued rwts: total=3252,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.460 00:16:45.460 Run status group 0 (all jobs): 00:16:45.460 READ: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:16:45.460 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:45.460 00:16:45.460 Disk stats (read/write): 00:16:45.460 nvme0n1: ios=3122/3112, merge=0/0, ticks=477/356, in_queue=833, util=91.18% 00:16:45.460 02:33:25 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:45.460 02:33:25 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:45.460 02:33:25 -- common/autotest_common.sh@1208 -- # local i=0 00:16:45.460 02:33:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.460 02:33:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:45.460 02:33:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:45.460 02:33:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.460 02:33:25 -- common/autotest_common.sh@1220 -- # return 0 00:16:45.460 02:33:25 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:45.460 02:33:25 -- target/nmic.sh@53 -- # nvmftestfini 00:16:45.460 02:33:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:45.460 02:33:25 -- nvmf/common.sh@116 -- # sync 00:16:45.460 02:33:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:45.460 02:33:26 -- nvmf/common.sh@119 -- # set +e 00:16:45.460 02:33:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:45.460 02:33:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:45.460 rmmod nvme_tcp 00:16:45.460 rmmod nvme_fabrics 00:16:45.460 rmmod nvme_keyring 00:16:45.460 02:33:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:45.460 02:33:26 -- nvmf/common.sh@123 -- # set -e 00:16:45.460 
02:33:26 -- nvmf/common.sh@124 -- # return 0 00:16:45.460 02:33:26 -- nvmf/common.sh@477 -- # '[' -n 76049 ']' 00:16:45.460 02:33:26 -- nvmf/common.sh@478 -- # killprocess 76049 00:16:45.460 02:33:26 -- common/autotest_common.sh@936 -- # '[' -z 76049 ']' 00:16:45.460 02:33:26 -- common/autotest_common.sh@940 -- # kill -0 76049 00:16:45.460 02:33:26 -- common/autotest_common.sh@941 -- # uname 00:16:45.460 02:33:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.460 02:33:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76049 00:16:45.460 02:33:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:45.460 killing process with pid 76049 00:16:45.460 02:33:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:45.460 02:33:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76049' 00:16:45.460 02:33:26 -- common/autotest_common.sh@955 -- # kill 76049 00:16:45.460 02:33:26 -- common/autotest_common.sh@960 -- # wait 76049 00:16:46.026 02:33:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:46.026 02:33:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:46.026 02:33:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:46.026 02:33:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.026 02:33:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:46.026 02:33:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.026 02:33:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.026 02:33:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.026 02:33:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:46.026 ************************************ 00:16:46.026 END TEST nvmf_nmic 00:16:46.027 ************************************ 00:16:46.027 00:16:46.027 real 0m6.061s 00:16:46.027 user 0m20.243s 00:16:46.027 sys 0m1.288s 00:16:46.027 02:33:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:46.027 02:33:26 -- common/autotest_common.sh@10 -- # set +x 00:16:46.027 02:33:26 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:46.027 02:33:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:46.027 02:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.027 02:33:26 -- common/autotest_common.sh@10 -- # set +x 00:16:46.027 ************************************ 00:16:46.027 START TEST nvmf_fio_target 00:16:46.027 ************************************ 00:16:46.027 02:33:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:46.027 * Looking for test storage... 
00:16:46.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:46.027 02:33:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:46.027 02:33:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:46.027 02:33:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:46.285 02:33:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:46.285 02:33:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:46.285 02:33:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:46.285 02:33:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:46.285 02:33:26 -- scripts/common.sh@335 -- # IFS=.-: 00:16:46.285 02:33:26 -- scripts/common.sh@335 -- # read -ra ver1 00:16:46.285 02:33:26 -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.285 02:33:26 -- scripts/common.sh@336 -- # read -ra ver2 00:16:46.285 02:33:26 -- scripts/common.sh@337 -- # local 'op=<' 00:16:46.285 02:33:26 -- scripts/common.sh@339 -- # ver1_l=2 00:16:46.285 02:33:26 -- scripts/common.sh@340 -- # ver2_l=1 00:16:46.285 02:33:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:46.285 02:33:26 -- scripts/common.sh@343 -- # case "$op" in 00:16:46.285 02:33:26 -- scripts/common.sh@344 -- # : 1 00:16:46.285 02:33:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:46.285 02:33:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:46.285 02:33:26 -- scripts/common.sh@364 -- # decimal 1 00:16:46.285 02:33:26 -- scripts/common.sh@352 -- # local d=1 00:16:46.285 02:33:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.285 02:33:26 -- scripts/common.sh@354 -- # echo 1 00:16:46.285 02:33:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:46.285 02:33:26 -- scripts/common.sh@365 -- # decimal 2 00:16:46.285 02:33:26 -- scripts/common.sh@352 -- # local d=2 00:16:46.285 02:33:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.285 02:33:26 -- scripts/common.sh@354 -- # echo 2 00:16:46.285 02:33:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:46.285 02:33:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:46.285 02:33:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:46.285 02:33:26 -- scripts/common.sh@367 -- # return 0 00:16:46.285 02:33:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.285 02:33:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:46.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.285 --rc genhtml_branch_coverage=1 00:16:46.285 --rc genhtml_function_coverage=1 00:16:46.285 --rc genhtml_legend=1 00:16:46.285 --rc geninfo_all_blocks=1 00:16:46.285 --rc geninfo_unexecuted_blocks=1 00:16:46.285 00:16:46.285 ' 00:16:46.285 02:33:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:46.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.285 --rc genhtml_branch_coverage=1 00:16:46.285 --rc genhtml_function_coverage=1 00:16:46.285 --rc genhtml_legend=1 00:16:46.285 --rc geninfo_all_blocks=1 00:16:46.285 --rc geninfo_unexecuted_blocks=1 00:16:46.285 00:16:46.285 ' 00:16:46.285 02:33:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:46.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.285 --rc genhtml_branch_coverage=1 00:16:46.285 --rc genhtml_function_coverage=1 00:16:46.285 --rc genhtml_legend=1 00:16:46.285 --rc geninfo_all_blocks=1 00:16:46.285 --rc geninfo_unexecuted_blocks=1 00:16:46.285 00:16:46.285 ' 00:16:46.285 
02:33:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:46.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.285 --rc genhtml_branch_coverage=1 00:16:46.285 --rc genhtml_function_coverage=1 00:16:46.285 --rc genhtml_legend=1 00:16:46.285 --rc geninfo_all_blocks=1 00:16:46.285 --rc geninfo_unexecuted_blocks=1 00:16:46.285 00:16:46.285 ' 00:16:46.285 02:33:26 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.285 02:33:26 -- nvmf/common.sh@7 -- # uname -s 00:16:46.285 02:33:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.285 02:33:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.285 02:33:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.285 02:33:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.285 02:33:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.285 02:33:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.285 02:33:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.285 02:33:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.285 02:33:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.285 02:33:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.285 02:33:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:16:46.285 02:33:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:16:46.285 02:33:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.285 02:33:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.285 02:33:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.285 02:33:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.285 02:33:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.285 02:33:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.285 02:33:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.285 02:33:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.285 02:33:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.286 02:33:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.286 02:33:26 -- paths/export.sh@5 -- # export PATH 00:16:46.286 02:33:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.286 02:33:26 -- nvmf/common.sh@46 -- # : 0 00:16:46.286 02:33:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:46.286 02:33:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:46.286 02:33:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:46.286 02:33:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.286 02:33:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.286 02:33:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:46.286 02:33:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:46.286 02:33:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:46.286 02:33:26 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.286 02:33:26 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.286 02:33:26 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.286 02:33:26 -- target/fio.sh@16 -- # nvmftestinit 00:16:46.286 02:33:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:46.286 02:33:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.286 02:33:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:46.286 02:33:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:46.286 02:33:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:46.286 02:33:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.286 02:33:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.286 02:33:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.286 02:33:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:46.286 02:33:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:46.286 02:33:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:46.286 02:33:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:46.286 02:33:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:46.286 02:33:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:46.286 02:33:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.286 02:33:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.286 02:33:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:46.286 02:33:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:46.286 02:33:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:46.286 02:33:26 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.286 02:33:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.286 02:33:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.286 02:33:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.286 02:33:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.286 02:33:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.286 02:33:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.286 02:33:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:46.286 02:33:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:46.286 Cannot find device "nvmf_tgt_br" 00:16:46.286 02:33:26 -- nvmf/common.sh@154 -- # true 00:16:46.286 02:33:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.286 Cannot find device "nvmf_tgt_br2" 00:16:46.286 02:33:26 -- nvmf/common.sh@155 -- # true 00:16:46.286 02:33:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:46.286 02:33:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:46.286 Cannot find device "nvmf_tgt_br" 00:16:46.286 02:33:26 -- nvmf/common.sh@157 -- # true 00:16:46.286 02:33:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:46.286 Cannot find device "nvmf_tgt_br2" 00:16:46.286 02:33:26 -- nvmf/common.sh@158 -- # true 00:16:46.286 02:33:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:46.286 02:33:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:46.286 02:33:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.286 02:33:26 -- nvmf/common.sh@161 -- # true 00:16:46.286 02:33:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.286 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.286 02:33:26 -- nvmf/common.sh@162 -- # true 00:16:46.286 02:33:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.286 02:33:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.286 02:33:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.286 02:33:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.286 02:33:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.286 02:33:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.544 02:33:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.544 02:33:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:46.544 02:33:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:46.544 02:33:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:46.544 02:33:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:46.544 02:33:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:46.544 02:33:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:46.544 02:33:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.544 02:33:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:46.544 02:33:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:46.544 02:33:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:46.544 02:33:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:46.544 02:33:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.544 02:33:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.544 02:33:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.544 02:33:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.544 02:33:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.544 02:33:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:46.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:46.544 00:16:46.544 --- 10.0.0.2 ping statistics --- 00:16:46.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.544 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:46.544 02:33:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:46.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:46.545 00:16:46.545 --- 10.0.0.3 ping statistics --- 00:16:46.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.545 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:46.545 02:33:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:46.545 00:16:46.545 --- 10.0.0.1 ping statistics --- 00:16:46.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.545 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:46.545 02:33:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.545 02:33:27 -- nvmf/common.sh@421 -- # return 0 00:16:46.545 02:33:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:46.545 02:33:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.545 02:33:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:46.545 02:33:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:46.545 02:33:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.545 02:33:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:46.545 02:33:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:46.545 02:33:27 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:46.545 02:33:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:46.545 02:33:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:46.545 02:33:27 -- common/autotest_common.sh@10 -- # set +x 00:16:46.545 02:33:27 -- nvmf/common.sh@469 -- # nvmfpid=76348 00:16:46.545 02:33:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.545 02:33:27 -- nvmf/common.sh@470 -- # waitforlisten 76348 00:16:46.545 02:33:27 -- common/autotest_common.sh@829 -- # '[' -z 76348 ']' 00:16:46.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
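(Editorial note: the wait step announced above simply polls until the target launched inside the namespace answers on its RPC socket. A rough equivalent, assuming the default /var/tmp/spdk.sock path and the rpc.py location shown in the log, would be the loop below; the real waitforlisten helper in autotest_common.sh also checks that the pid is still alive and times out with an error.)

# Hypothetical poll loop standing in for waitforlisten (not the exact helper).
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break    # nvmf_tgt is up and answering RPCs
    fi
    sleep 0.1    # retry until the UNIX domain socket accepts connections
done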
00:16:46.545 02:33:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.545 02:33:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.545 02:33:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.545 02:33:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.545 02:33:27 -- common/autotest_common.sh@10 -- # set +x 00:16:46.545 [2024-11-21 02:33:27.126246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:46.545 [2024-11-21 02:33:27.126314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.803 [2024-11-21 02:33:27.259909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.803 [2024-11-21 02:33:27.357924] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:46.803 [2024-11-21 02:33:27.358065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.803 [2024-11-21 02:33:27.358078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.803 [2024-11-21 02:33:27.358086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.803 [2024-11-21 02:33:27.358250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.803 [2024-11-21 02:33:27.358602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.803 [2024-11-21 02:33:27.359274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.803 [2024-11-21 02:33:27.359319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.734 02:33:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.734 02:33:28 -- common/autotest_common.sh@862 -- # return 0 00:16:47.734 02:33:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:47.734 02:33:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:47.734 02:33:28 -- common/autotest_common.sh@10 -- # set +x 00:16:47.734 02:33:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.734 02:33:28 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:47.990 [2024-11-21 02:33:28.388276] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.990 02:33:28 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.247 02:33:28 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:48.247 02:33:28 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.504 02:33:28 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:48.504 02:33:28 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.761 02:33:29 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:48.761 02:33:29 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.018 02:33:29 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:49.018 02:33:29 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 
'Malloc2 Malloc3' 00:16:49.275 02:33:29 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.533 02:33:29 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:49.533 02:33:29 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.790 02:33:30 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:49.790 02:33:30 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.048 02:33:30 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:50.048 02:33:30 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:50.305 02:33:30 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:50.562 02:33:30 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:50.562 02:33:30 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:50.562 02:33:31 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:50.562 02:33:31 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:50.819 02:33:31 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.077 [2024-11-21 02:33:31.579674] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.077 02:33:31 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:51.335 02:33:31 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:51.592 02:33:32 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.592 02:33:32 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:51.592 02:33:32 -- common/autotest_common.sh@1187 -- # local i=0 00:16:51.592 02:33:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.592 02:33:32 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:51.592 02:33:32 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:51.592 02:33:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:54.120 02:33:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:54.120 02:33:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:54.120 02:33:34 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.120 02:33:34 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:54.120 02:33:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.120 02:33:34 -- common/autotest_common.sh@1197 -- # return 0 00:16:54.120 02:33:34 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:54.120 [global] 00:16:54.120 thread=1 00:16:54.120 invalidate=1 00:16:54.120 rw=write 00:16:54.120 time_based=1 00:16:54.120 runtime=1 00:16:54.120 ioengine=libaio 00:16:54.120 direct=1 
00:16:54.120 bs=4096 00:16:54.120 iodepth=1 00:16:54.120 norandommap=0 00:16:54.120 numjobs=1 00:16:54.120 00:16:54.120 verify_dump=1 00:16:54.120 verify_backlog=512 00:16:54.120 verify_state_save=0 00:16:54.120 do_verify=1 00:16:54.120 verify=crc32c-intel 00:16:54.120 [job0] 00:16:54.120 filename=/dev/nvme0n1 00:16:54.120 [job1] 00:16:54.120 filename=/dev/nvme0n2 00:16:54.120 [job2] 00:16:54.120 filename=/dev/nvme0n3 00:16:54.120 [job3] 00:16:54.120 filename=/dev/nvme0n4 00:16:54.120 Could not set queue depth (nvme0n1) 00:16:54.120 Could not set queue depth (nvme0n2) 00:16:54.120 Could not set queue depth (nvme0n3) 00:16:54.120 Could not set queue depth (nvme0n4) 00:16:54.120 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.120 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.120 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.120 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.120 fio-3.35 00:16:54.120 Starting 4 threads 00:16:55.054 00:16:55.054 job0: (groupid=0, jobs=1): err= 0: pid=76638: Thu Nov 21 02:33:35 2024 00:16:55.054 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:55.054 slat (nsec): min=13797, max=56604, avg=17422.57, stdev=5323.30 00:16:55.054 clat (usec): min=160, max=354, avg=221.31, stdev=29.36 00:16:55.054 lat (usec): min=174, max=372, avg=238.73, stdev=30.21 00:16:55.054 clat percentiles (usec): 00:16:55.054 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 194], 00:16:55.054 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 229], 00:16:55.054 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 269], 00:16:55.054 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 326], 99.95th=[ 330], 00:16:55.054 | 99.99th=[ 355] 00:16:55.054 write: IOPS=2435, BW=9742KiB/s (9976kB/s)(9752KiB/1001msec); 0 zone resets 00:16:55.054 slat (nsec): min=19668, max=96712, avg=26038.30, stdev=7569.62 00:16:55.054 clat (usec): min=98, max=2637, avg=180.32, stdev=58.56 00:16:55.054 lat (usec): min=120, max=2660, avg=206.36, stdev=59.65 00:16:55.054 clat percentiles (usec): 00:16:55.054 | 1.00th=[ 123], 5.00th=[ 137], 10.00th=[ 145], 20.00th=[ 155], 00:16:55.054 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 184], 00:16:55.054 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 231], 00:16:55.054 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 363], 99.95th=[ 578], 00:16:55.054 | 99.99th=[ 2638] 00:16:55.054 bw ( KiB/s): min= 8375, max= 8375, per=28.90%, avg=8375.00, stdev= 0.00, samples=1 00:16:55.054 iops : min= 2093, max= 2093, avg=2093.00, stdev= 0.00, samples=1 00:16:55.054 lat (usec) : 100=0.02%, 250=91.15%, 500=8.78%, 750=0.02% 00:16:55.054 lat (msec) : 4=0.02% 00:16:55.054 cpu : usr=1.90%, sys=7.00%, ctx=4486, majf=0, minf=5 00:16:55.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.054 issued rwts: total=2048,2438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.054 job1: (groupid=0, jobs=1): err= 0: pid=76639: Thu Nov 21 02:33:35 2024 00:16:55.054 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 
00:16:55.055 slat (nsec): min=11024, max=62075, avg=19225.76, stdev=6123.87 00:16:55.055 clat (usec): min=174, max=881, avg=343.33, stdev=157.44 00:16:55.055 lat (usec): min=190, max=906, avg=362.56, stdev=159.80 00:16:55.055 clat percentiles (usec): 00:16:55.055 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 229], 00:16:55.055 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 273], 00:16:55.055 | 70.00th=[ 433], 80.00th=[ 494], 90.00th=[ 619], 95.00th=[ 676], 00:16:55.055 | 99.00th=[ 750], 99.50th=[ 775], 99.90th=[ 824], 99.95th=[ 881], 00:16:55.055 | 99.99th=[ 881] 00:16:55.055 write: IOPS=1740, BW=6961KiB/s (7128kB/s)(6968KiB/1001msec); 0 zone resets 00:16:55.055 slat (nsec): min=18656, max=99572, avg=28367.95, stdev=8715.53 00:16:55.055 clat (usec): min=104, max=7940, avg=221.89, stdev=223.95 00:16:55.055 lat (usec): min=134, max=7968, avg=250.26, stdev=224.79 00:16:55.055 clat percentiles (usec): 00:16:55.055 | 1.00th=[ 127], 5.00th=[ 143], 10.00th=[ 155], 20.00th=[ 169], 00:16:55.055 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 196], 60.00th=[ 204], 00:16:55.055 | 70.00th=[ 215], 80.00th=[ 231], 90.00th=[ 326], 95.00th=[ 396], 00:16:55.055 | 99.00th=[ 474], 99.50th=[ 502], 99.90th=[ 3097], 99.95th=[ 7963], 00:16:55.055 | 99.99th=[ 7963] 00:16:55.055 bw ( KiB/s): min= 8320, max= 8320, per=28.71%, avg=8320.00, stdev= 0.00, samples=1 00:16:55.055 iops : min= 2080, max= 2080, avg=2080.00, stdev= 0.00, samples=1 00:16:55.055 lat (usec) : 250=65.16%, 500=25.44%, 750=8.79%, 1000=0.46% 00:16:55.055 lat (msec) : 2=0.06%, 4=0.06%, 10=0.03% 00:16:55.055 cpu : usr=1.70%, sys=5.80%, ctx=3280, majf=0, minf=11 00:16:55.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.055 issued rwts: total=1536,1742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.055 job2: (groupid=0, jobs=1): err= 0: pid=76640: Thu Nov 21 02:33:35 2024 00:16:55.055 read: IOPS=1042, BW=4172KiB/s (4272kB/s)(4176KiB/1001msec) 00:16:55.055 slat (nsec): min=13120, max=78189, avg=25259.17, stdev=8304.15 00:16:55.055 clat (usec): min=191, max=2289, avg=415.20, stdev=108.08 00:16:55.055 lat (usec): min=220, max=2306, avg=440.45, stdev=106.65 00:16:55.055 clat percentiles (usec): 00:16:55.055 | 1.00th=[ 243], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 347], 00:16:55.055 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 392], 60.00th=[ 416], 00:16:55.055 | 70.00th=[ 445], 80.00th=[ 478], 90.00th=[ 519], 95.00th=[ 553], 00:16:55.055 | 99.00th=[ 742], 99.50th=[ 766], 99.90th=[ 1336], 99.95th=[ 2278], 00:16:55.055 | 99.99th=[ 2278] 00:16:55.055 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:55.055 slat (usec): min=16, max=116, avg=37.63, stdev=10.37 00:16:55.055 clat (usec): min=149, max=1704, avg=308.80, stdev=75.15 00:16:55.055 lat (usec): min=183, max=1768, avg=346.42, stdev=73.98 00:16:55.055 clat percentiles (usec): 00:16:55.055 | 1.00th=[ 192], 5.00th=[ 225], 10.00th=[ 239], 20.00th=[ 251], 00:16:55.055 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 310], 00:16:55.055 | 70.00th=[ 330], 80.00th=[ 367], 90.00th=[ 404], 95.00th=[ 437], 00:16:55.055 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 578], 99.95th=[ 1713], 00:16:55.055 | 99.99th=[ 1713] 00:16:55.055 bw ( KiB/s): min= 7265, max= 7265, per=25.07%, 
avg=7265.00, stdev= 0.00, samples=1 00:16:55.055 iops : min= 1816, max= 1816, avg=1816.00, stdev= 0.00, samples=1 00:16:55.055 lat (usec) : 250=11.63%, 500=81.98%, 750=6.09%, 1000=0.19% 00:16:55.055 lat (msec) : 2=0.08%, 4=0.04% 00:16:55.055 cpu : usr=2.00%, sys=6.30%, ctx=2594, majf=0, minf=7 00:16:55.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.055 issued rwts: total=1044,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.055 job3: (groupid=0, jobs=1): err= 0: pid=76641: Thu Nov 21 02:33:35 2024 00:16:55.055 read: IOPS=1023, BW=4096KiB/s (4194kB/s)(4100KiB/1001msec) 00:16:55.055 slat (nsec): min=16928, max=97677, avg=30784.77, stdev=14079.52 00:16:55.055 clat (usec): min=215, max=989, avg=411.36, stdev=72.02 00:16:55.055 lat (usec): min=246, max=1006, avg=442.14, stdev=78.67 00:16:55.055 clat percentiles (usec): 00:16:55.055 | 1.00th=[ 281], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 355], 00:16:55.055 | 30.00th=[ 367], 40.00th=[ 383], 50.00th=[ 400], 60.00th=[ 424], 00:16:55.055 | 70.00th=[ 445], 80.00th=[ 465], 90.00th=[ 498], 95.00th=[ 529], 00:16:55.055 | 99.00th=[ 635], 99.50th=[ 709], 99.90th=[ 824], 99.95th=[ 988], 00:16:55.055 | 99.99th=[ 988] 00:16:55.055 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:55.055 slat (usec): min=26, max=116, avg=43.01, stdev=10.11 00:16:55.055 clat (usec): min=162, max=3057, avg=307.23, stdev=105.45 00:16:55.055 lat (usec): min=210, max=3095, avg=350.24, stdev=105.54 00:16:55.055 clat percentiles (usec): 00:16:55.055 | 1.00th=[ 192], 5.00th=[ 215], 10.00th=[ 229], 20.00th=[ 241], 00:16:55.055 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 306], 00:16:55.055 | 70.00th=[ 343], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[ 429], 00:16:55.055 | 99.00th=[ 465], 99.50th=[ 490], 99.90th=[ 1696], 99.95th=[ 3064], 00:16:55.055 | 99.99th=[ 3064] 00:16:55.055 bw ( KiB/s): min= 7105, max= 7105, per=24.52%, avg=7105.00, stdev= 0.00, samples=1 00:16:55.055 iops : min= 1776, max= 1776, avg=1776.00, stdev= 0.00, samples=1 00:16:55.055 lat (usec) : 250=15.62%, 500=80.55%, 750=3.59%, 1000=0.16% 00:16:55.055 lat (msec) : 2=0.04%, 4=0.04% 00:16:55.055 cpu : usr=2.50%, sys=6.70%, ctx=2563, majf=0, minf=16 00:16:55.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.055 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.055 00:16:55.055 Run status group 0 (all jobs): 00:16:55.055 READ: bw=22.1MiB/s (23.1MB/s), 4096KiB/s-8184KiB/s (4194kB/s-8380kB/s), io=22.1MiB (23.2MB), run=1001-1001msec 00:16:55.055 WRITE: bw=28.3MiB/s (29.7MB/s), 6138KiB/s-9742KiB/s (6285kB/s-9976kB/s), io=28.3MiB (29.7MB), run=1001-1001msec 00:16:55.055 00:16:55.055 Disk stats (read/write): 00:16:55.055 nvme0n1: ios=1790/2048, merge=0/0, ticks=452/396, in_queue=848, util=88.28% 00:16:55.055 nvme0n2: ios=1480/1536, merge=0/0, ticks=503/313, in_queue=816, util=87.20% 00:16:55.055 nvme0n3: ios=1024/1205, merge=0/0, ticks=427/365, in_queue=792, util=89.09% 00:16:55.055 nvme0n4: ios=1024/1166, 
merge=0/0, ticks=430/364, in_queue=794, util=89.64% 00:16:55.055 02:33:35 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:55.055 [global] 00:16:55.055 thread=1 00:16:55.055 invalidate=1 00:16:55.055 rw=randwrite 00:16:55.055 time_based=1 00:16:55.055 runtime=1 00:16:55.055 ioengine=libaio 00:16:55.055 direct=1 00:16:55.055 bs=4096 00:16:55.055 iodepth=1 00:16:55.055 norandommap=0 00:16:55.055 numjobs=1 00:16:55.055 00:16:55.055 verify_dump=1 00:16:55.055 verify_backlog=512 00:16:55.055 verify_state_save=0 00:16:55.055 do_verify=1 00:16:55.055 verify=crc32c-intel 00:16:55.055 [job0] 00:16:55.055 filename=/dev/nvme0n1 00:16:55.055 [job1] 00:16:55.055 filename=/dev/nvme0n2 00:16:55.055 [job2] 00:16:55.055 filename=/dev/nvme0n3 00:16:55.055 [job3] 00:16:55.055 filename=/dev/nvme0n4 00:16:55.313 Could not set queue depth (nvme0n1) 00:16:55.313 Could not set queue depth (nvme0n2) 00:16:55.313 Could not set queue depth (nvme0n3) 00:16:55.313 Could not set queue depth (nvme0n4) 00:16:55.313 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.313 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.313 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.313 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.313 fio-3.35 00:16:55.313 Starting 4 threads 00:16:56.690 00:16:56.690 job0: (groupid=0, jobs=1): err= 0: pid=76694: Thu Nov 21 02:33:36 2024 00:16:56.690 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:56.690 slat (nsec): min=13639, max=78730, avg=17271.51, stdev=5896.06 00:16:56.690 clat (usec): min=164, max=1681, avg=344.54, stdev=44.68 00:16:56.690 lat (usec): min=181, max=1696, avg=361.81, stdev=44.69 00:16:56.690 clat percentiles (usec): 00:16:56.690 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326], 00:16:56.690 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 351], 00:16:56.690 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 392], 00:16:56.690 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 562], 99.95th=[ 1680], 00:16:56.690 | 99.99th=[ 1680] 00:16:56.690 write: IOPS=1539, BW=6158KiB/s (6306kB/s)(6164KiB/1001msec); 0 zone resets 00:16:56.690 slat (usec): min=21, max=102, avg=35.04, stdev= 9.23 00:16:56.690 clat (usec): min=131, max=401, avg=249.19, stdev=28.04 00:16:56.690 lat (usec): min=171, max=432, avg=284.24, stdev=27.85 00:16:56.690 clat percentiles (usec): 00:16:56.690 | 1.00th=[ 196], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:16:56.690 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:16:56.690 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 293], 00:16:56.690 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 400], 99.95th=[ 400], 00:16:56.690 | 99.99th=[ 400] 00:16:56.690 bw ( KiB/s): min= 8192, max= 8192, per=29.67%, avg=8192.00, stdev= 0.00, samples=1 00:16:56.690 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:56.690 lat (usec) : 250=28.27%, 500=71.63%, 750=0.06% 00:16:56.690 lat (msec) : 2=0.03% 00:16:56.690 cpu : usr=1.40%, sys=5.80%, ctx=3077, majf=0, minf=13 00:16:56.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.690 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.690 issued rwts: total=1536,1541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.690 job1: (groupid=0, jobs=1): err= 0: pid=76695: Thu Nov 21 02:33:36 2024 00:16:56.690 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:56.690 slat (nsec): min=17172, max=85907, avg=25184.12, stdev=6109.93 00:16:56.690 clat (usec): min=178, max=2167, avg=335.47, stdev=54.80 00:16:56.690 lat (usec): min=201, max=2189, avg=360.65, stdev=54.78 00:16:56.690 clat percentiles (usec): 00:16:56.690 | 1.00th=[ 277], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 314], 00:16:56.690 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:16:56.690 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 379], 00:16:56.690 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 791], 99.95th=[ 2180], 00:16:56.690 | 99.99th=[ 2180] 00:16:56.690 write: IOPS=1538, BW=6154KiB/s (6302kB/s)(6160KiB/1001msec); 0 zone resets 00:16:56.690 slat (nsec): min=24925, max=97302, avg=34938.50, stdev=7983.56 00:16:56.690 clat (usec): min=138, max=420, avg=249.32, stdev=27.75 00:16:56.690 lat (usec): min=180, max=453, avg=284.26, stdev=28.27 00:16:56.690 clat percentiles (usec): 00:16:56.690 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 229], 00:16:56.690 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:16:56.690 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:16:56.690 | 99.00th=[ 334], 99.50th=[ 367], 99.90th=[ 400], 99.95th=[ 420], 00:16:56.690 | 99.99th=[ 420] 00:16:56.690 bw ( KiB/s): min= 8192, max= 8192, per=29.67%, avg=8192.00, stdev= 0.00, samples=1 00:16:56.690 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:56.690 lat (usec) : 250=27.89%, 500=72.01%, 750=0.03%, 1000=0.03% 00:16:56.690 lat (msec) : 4=0.03% 00:16:56.690 cpu : usr=2.00%, sys=6.70%, ctx=3077, majf=0, minf=9 00:16:56.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.690 issued rwts: total=1536,1540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.690 job2: (groupid=0, jobs=1): err= 0: pid=76696: Thu Nov 21 02:33:36 2024 00:16:56.690 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:56.690 slat (nsec): min=10100, max=53084, avg=16164.31, stdev=5224.86 00:16:56.690 clat (usec): min=171, max=802, avg=312.37, stdev=82.69 00:16:56.690 lat (usec): min=188, max=820, avg=328.54, stdev=81.07 00:16:56.690 clat percentiles (usec): 00:16:56.690 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 225], 00:16:56.690 | 30.00th=[ 239], 40.00th=[ 262], 50.00th=[ 330], 60.00th=[ 351], 00:16:56.690 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 445], 00:16:56.690 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 562], 99.95th=[ 799], 00:16:56.690 | 99.99th=[ 799] 00:16:56.690 write: IOPS=1832, BW=7329KiB/s (7505kB/s)(7336KiB/1001msec); 0 zone resets 00:16:56.690 slat (usec): min=11, max=116, avg=25.41, stdev= 8.65 00:16:56.690 clat (usec): min=137, max=418, avg=241.29, stdev=51.52 00:16:56.690 lat (usec): min=160, max=454, avg=266.70, stdev=49.46 00:16:56.690 clat percentiles (usec): 00:16:56.690 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 
194], 00:16:56.690 | 30.00th=[ 206], 40.00th=[ 217], 50.00th=[ 237], 60.00th=[ 255], 00:16:56.690 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 330], 00:16:56.690 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 416], 99.95th=[ 420], 00:16:56.690 | 99.99th=[ 420] 00:16:56.690 bw ( KiB/s): min= 6472, max= 8192, per=26.55%, avg=7332.00, stdev=1216.22, samples=2 00:16:56.690 iops : min= 1618, max= 2048, avg=1833.00, stdev=304.06, samples=2 00:16:56.690 lat (usec) : 250=47.72%, 500=52.02%, 750=0.24%, 1000=0.03% 00:16:56.690 cpu : usr=1.70%, sys=5.20%, ctx=3376, majf=0, minf=9 00:16:56.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.690 issued rwts: total=1536,1834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.690 job3: (groupid=0, jobs=1): err= 0: pid=76697: Thu Nov 21 02:33:36 2024 00:16:56.690 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:56.690 slat (usec): min=10, max=150, avg=15.97, stdev= 6.74 00:16:56.690 clat (usec): min=120, max=3949, avg=303.85, stdev=163.82 00:16:56.690 lat (usec): min=173, max=3973, avg=319.83, stdev=163.34 00:16:56.690 clat percentiles (usec): 00:16:56.690 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 212], 00:16:56.690 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 269], 60.00th=[ 347], 00:16:56.690 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 433], 00:16:56.690 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 3392], 99.95th=[ 3949], 00:16:56.690 | 99.99th=[ 3949] 00:16:56.690 write: IOPS=1993, BW=7972KiB/s (8163kB/s)(7980KiB/1001msec); 0 zone resets 00:16:56.690 slat (usec): min=10, max=125, avg=22.42, stdev= 7.86 00:16:56.690 clat (usec): min=108, max=3232, avg=229.66, stdev=113.97 00:16:56.690 lat (usec): min=137, max=3253, avg=252.08, stdev=113.52 00:16:56.690 clat percentiles (usec): 00:16:56.690 | 1.00th=[ 127], 5.00th=[ 141], 10.00th=[ 151], 20.00th=[ 167], 00:16:56.690 | 30.00th=[ 180], 40.00th=[ 196], 50.00th=[ 212], 60.00th=[ 247], 00:16:56.690 | 70.00th=[ 269], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 330], 00:16:56.690 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 2868], 99.95th=[ 3228], 00:16:56.690 | 99.99th=[ 3228] 00:16:56.690 bw ( KiB/s): min= 8192, max= 8192, per=29.67%, avg=8192.00, stdev= 0.00, samples=1 00:16:56.690 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:56.690 lat (usec) : 250=54.60%, 500=44.49%, 750=0.68% 00:16:56.690 lat (msec) : 2=0.08%, 4=0.14% 00:16:56.690 cpu : usr=1.60%, sys=4.90%, ctx=3545, majf=0, minf=16 00:16:56.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:56.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:56.690 issued rwts: total=1536,1995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:56.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:56.690 00:16:56.690 Run status group 0 (all jobs): 00:16:56.690 READ: bw=24.0MiB/s (25.1MB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:16:56.690 WRITE: bw=27.0MiB/s (28.3MB/s), 6154KiB/s-7972KiB/s (6302kB/s-8163kB/s), io=27.0MiB (28.3MB), run=1001-1001msec 00:16:56.690 00:16:56.690 Disk stats (read/write): 00:16:56.690 nvme0n1: 
ios=1201/1536, merge=0/0, ticks=448/414, in_queue=862, util=89.68% 00:16:56.690 nvme0n2: ios=1178/1536, merge=0/0, ticks=427/405, in_queue=832, util=88.21% 00:16:56.690 nvme0n3: ios=1411/1536, merge=0/0, ticks=442/385, in_queue=827, util=89.11% 00:16:56.690 nvme0n4: ios=1520/1536, merge=0/0, ticks=469/343, in_queue=812, util=88.83% 00:16:56.690 02:33:36 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:56.690 [global] 00:16:56.690 thread=1 00:16:56.690 invalidate=1 00:16:56.690 rw=write 00:16:56.690 time_based=1 00:16:56.690 runtime=1 00:16:56.690 ioengine=libaio 00:16:56.690 direct=1 00:16:56.690 bs=4096 00:16:56.690 iodepth=128 00:16:56.690 norandommap=0 00:16:56.690 numjobs=1 00:16:56.690 00:16:56.690 verify_dump=1 00:16:56.690 verify_backlog=512 00:16:56.690 verify_state_save=0 00:16:56.690 do_verify=1 00:16:56.690 verify=crc32c-intel 00:16:56.690 [job0] 00:16:56.690 filename=/dev/nvme0n1 00:16:56.690 [job1] 00:16:56.690 filename=/dev/nvme0n2 00:16:56.690 [job2] 00:16:56.690 filename=/dev/nvme0n3 00:16:56.690 [job3] 00:16:56.690 filename=/dev/nvme0n4 00:16:56.690 Could not set queue depth (nvme0n1) 00:16:56.690 Could not set queue depth (nvme0n2) 00:16:56.690 Could not set queue depth (nvme0n3) 00:16:56.690 Could not set queue depth (nvme0n4) 00:16:56.690 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.690 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.690 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.690 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:56.690 fio-3.35 00:16:56.690 Starting 4 threads 00:16:58.063 00:16:58.063 job0: (groupid=0, jobs=1): err= 0: pid=76758: Thu Nov 21 02:33:38 2024 00:16:58.063 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:16:58.063 slat (usec): min=8, max=5126, avg=112.80, stdev=582.51 00:16:58.063 clat (usec): min=10365, max=19796, avg=14642.68, stdev=1272.20 00:16:58.063 lat (usec): min=10380, max=22166, avg=14755.48, stdev=1282.35 00:16:58.063 clat percentiles (usec): 00:16:58.063 | 1.00th=[10814], 5.00th=[11863], 10.00th=[13042], 20.00th=[13960], 00:16:58.063 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14877], 60.00th=[15008], 00:16:58.063 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:16:58.063 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:16:58.063 | 99.99th=[19792] 00:16:58.063 write: IOPS=4360, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1003msec); 0 zone resets 00:16:58.063 slat (usec): min=11, max=5696, avg=115.73, stdev=555.07 00:16:58.063 clat (usec): min=354, max=20661, avg=15240.82, stdev=2214.74 00:16:58.063 lat (usec): min=4692, max=20696, avg=15356.55, stdev=2183.26 00:16:58.063 clat percentiles (usec): 00:16:58.063 | 1.00th=[ 5932], 5.00th=[11469], 10.00th=[11994], 20.00th=[13304], 00:16:58.063 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:16:58.063 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17433], 95.00th=[18482], 00:16:58.063 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 00:16:58.063 | 99.99th=[20579] 00:16:58.063 bw ( KiB/s): min=16384, max=17584, per=33.80%, avg=16984.00, stdev=848.53, samples=2 00:16:58.063 iops : min= 4096, max= 4396, avg=4246.00, stdev=212.13, samples=2 00:16:58.063 lat 
(usec) : 500=0.01% 00:16:58.063 lat (msec) : 10=0.55%, 20=99.42%, 50=0.01% 00:16:58.063 cpu : usr=4.39%, sys=11.38%, ctx=492, majf=0, minf=3 00:16:58.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:58.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.063 issued rwts: total=4096,4374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.063 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.063 job1: (groupid=0, jobs=1): err= 0: pid=76759: Thu Nov 21 02:33:38 2024 00:16:58.063 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:16:58.063 slat (usec): min=5, max=8848, avg=247.59, stdev=1156.64 00:16:58.063 clat (usec): min=20268, max=40149, avg=32035.04, stdev=2631.38 00:16:58.063 lat (usec): min=20292, max=40178, avg=32282.62, stdev=2423.71 00:16:58.063 clat percentiles (usec): 00:16:58.063 | 1.00th=[24249], 5.00th=[26346], 10.00th=[28181], 20.00th=[30540], 00:16:58.063 | 30.00th=[31327], 40.00th=[32113], 50.00th=[32637], 60.00th=[33162], 00:16:58.063 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:16:58.063 | 99.00th=[35914], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:16:58.063 | 99.99th=[40109] 00:16:58.063 write: IOPS=2294, BW=9179KiB/s (9400kB/s)(9216KiB/1004msec); 0 zone resets 00:16:58.063 slat (usec): min=16, max=10933, avg=204.93, stdev=985.81 00:16:58.063 clat (usec): min=3148, max=40448, avg=26373.81, stdev=5309.99 00:16:58.063 lat (usec): min=7639, max=40470, avg=26578.74, stdev=5296.11 00:16:58.063 clat percentiles (usec): 00:16:58.063 | 1.00th=[10159], 5.00th=[17433], 10.00th=[19268], 20.00th=[22676], 00:16:58.063 | 30.00th=[24249], 40.00th=[25035], 50.00th=[25297], 60.00th=[27132], 00:16:58.063 | 70.00th=[29230], 80.00th=[32113], 90.00th=[33162], 95.00th=[34866], 00:16:58.063 | 99.00th=[37487], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:16:58.063 | 99.99th=[40633] 00:16:58.063 bw ( KiB/s): min= 8208, max= 9226, per=17.35%, avg=8717.00, stdev=719.83, samples=2 00:16:58.063 iops : min= 2052, max= 2306, avg=2179.00, stdev=179.61, samples=2 00:16:58.063 lat (msec) : 4=0.02%, 10=0.39%, 20=5.08%, 50=94.51% 00:16:58.063 cpu : usr=2.69%, sys=6.88%, ctx=236, majf=0, minf=3 00:16:58.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:58.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.063 issued rwts: total=2048,2304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.063 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.063 job2: (groupid=0, jobs=1): err= 0: pid=76760: Thu Nov 21 02:33:38 2024 00:16:58.063 read: IOPS=1784, BW=7139KiB/s (7311kB/s)(7168KiB/1004msec) 00:16:58.063 slat (usec): min=13, max=11104, avg=262.98, stdev=1330.78 00:16:58.063 clat (usec): min=1104, max=45544, avg=32814.38, stdev=5083.40 00:16:58.063 lat (usec): min=8308, max=51256, avg=33077.36, stdev=4947.12 00:16:58.063 clat percentiles (usec): 00:16:58.063 | 1.00th=[ 8717], 5.00th=[24773], 10.00th=[30540], 20.00th=[31065], 00:16:58.063 | 30.00th=[31851], 40.00th=[32375], 50.00th=[32900], 60.00th=[33817], 00:16:58.063 | 70.00th=[35390], 80.00th=[35914], 90.00th=[36439], 95.00th=[38536], 00:16:58.063 | 99.00th=[44827], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:16:58.064 | 99.99th=[45351] 00:16:58.064 write: IOPS=2039, 
BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:16:58.064 slat (usec): min=12, max=8114, avg=249.65, stdev=845.81 00:16:58.064 clat (usec): min=16230, max=50332, avg=33007.88, stdev=7563.28 00:16:58.064 lat (usec): min=16256, max=50373, avg=33257.53, stdev=7575.34 00:16:58.064 clat percentiles (usec): 00:16:58.064 | 1.00th=[16712], 5.00th=[20841], 10.00th=[21103], 20.00th=[29492], 00:16:58.064 | 30.00th=[31851], 40.00th=[32900], 50.00th=[33424], 60.00th=[33817], 00:16:58.064 | 70.00th=[34341], 80.00th=[35914], 90.00th=[44303], 95.00th=[48497], 00:16:58.064 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50594], 00:16:58.064 | 99.99th=[50594] 00:16:58.064 bw ( KiB/s): min= 8192, max= 8192, per=16.30%, avg=8192.00, stdev= 0.00, samples=2 00:16:58.064 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:16:58.064 lat (msec) : 2=0.03%, 10=0.83%, 20=3.39%, 50=95.39%, 100=0.36% 00:16:58.064 cpu : usr=2.19%, sys=7.28%, ctx=277, majf=0, minf=6 00:16:58.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:58.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.064 issued rwts: total=1792,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.064 job3: (groupid=0, jobs=1): err= 0: pid=76761: Thu Nov 21 02:33:38 2024 00:16:58.064 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:16:58.064 slat (usec): min=9, max=7562, avg=133.66, stdev=817.85 00:16:58.064 clat (usec): min=10250, max=27306, avg=16983.32, stdev=1424.28 00:16:58.064 lat (usec): min=10271, max=27889, avg=17116.98, stdev=1595.72 00:16:58.064 clat percentiles (usec): 00:16:58.064 | 1.00th=[12911], 5.00th=[15401], 10.00th=[15664], 20.00th=[16188], 00:16:58.064 | 30.00th=[16450], 40.00th=[16581], 50.00th=[16712], 60.00th=[16909], 00:16:58.064 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18482], 95.00th=[19268], 00:16:58.064 | 99.00th=[22938], 99.50th=[23725], 99.90th=[24511], 99.95th=[25035], 00:16:58.064 | 99.99th=[27395] 00:16:58.064 write: IOPS=3879, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1005msec); 0 zone resets 00:16:58.064 slat (usec): min=10, max=9734, avg=126.30, stdev=818.75 00:16:58.064 clat (usec): min=280, max=30902, avg=16903.46, stdev=2654.08 00:16:58.064 lat (usec): min=5459, max=31005, avg=17029.76, stdev=2679.55 00:16:58.064 clat percentiles (usec): 00:16:58.064 | 1.00th=[ 6652], 5.00th=[11207], 10.00th=[15139], 20.00th=[15926], 00:16:58.064 | 30.00th=[16319], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:16:58.064 | 70.00th=[17695], 80.00th=[18220], 90.00th=[19792], 95.00th=[21627], 00:16:58.064 | 99.00th=[22152], 99.50th=[22938], 99.90th=[30278], 99.95th=[30802], 00:16:58.064 | 99.99th=[30802] 00:16:58.064 bw ( KiB/s): min=13840, max=16328, per=30.02%, avg=15084.00, stdev=1759.28, samples=2 00:16:58.064 iops : min= 3460, max= 4082, avg=3771.00, stdev=439.82, samples=2 00:16:58.064 lat (usec) : 500=0.01% 00:16:58.064 lat (msec) : 10=1.31%, 20=92.56%, 50=6.12% 00:16:58.064 cpu : usr=4.38%, sys=10.66%, ctx=180, majf=0, minf=1 00:16:58.064 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:58.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.064 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.064 issued rwts: total=3584,3899,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:58.064 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.064 00:16:58.064 Run status group 0 (all jobs): 00:16:58.064 READ: bw=44.8MiB/s (47.0MB/s), 7139KiB/s-16.0MiB/s (7311kB/s-16.7MB/s), io=45.0MiB (47.2MB), run=1003-1005msec 00:16:58.064 WRITE: bw=49.1MiB/s (51.5MB/s), 8159KiB/s-17.0MiB/s (8355kB/s-17.9MB/s), io=49.3MiB (51.7MB), run=1003-1005msec 00:16:58.064 00:16:58.064 Disk stats (read/write): 00:16:58.064 nvme0n1: ios=3634/3731, merge=0/0, ticks=16512/17091, in_queue=33603, util=88.98% 00:16:58.064 nvme0n2: ios=1806/2048, merge=0/0, ticks=14049/11919, in_queue=25968, util=89.28% 00:16:58.064 nvme0n3: ios=1536/1759, merge=0/0, ticks=12414/13856, in_queue=26270, util=89.08% 00:16:58.064 nvme0n4: ios=3078/3325, merge=0/0, ticks=23987/24532, in_queue=48519, util=89.73% 00:16:58.064 02:33:38 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:58.064 [global] 00:16:58.064 thread=1 00:16:58.064 invalidate=1 00:16:58.064 rw=randwrite 00:16:58.064 time_based=1 00:16:58.064 runtime=1 00:16:58.064 ioengine=libaio 00:16:58.064 direct=1 00:16:58.064 bs=4096 00:16:58.064 iodepth=128 00:16:58.064 norandommap=0 00:16:58.064 numjobs=1 00:16:58.064 00:16:58.064 verify_dump=1 00:16:58.064 verify_backlog=512 00:16:58.064 verify_state_save=0 00:16:58.064 do_verify=1 00:16:58.064 verify=crc32c-intel 00:16:58.064 [job0] 00:16:58.064 filename=/dev/nvme0n1 00:16:58.064 [job1] 00:16:58.064 filename=/dev/nvme0n2 00:16:58.064 [job2] 00:16:58.064 filename=/dev/nvme0n3 00:16:58.064 [job3] 00:16:58.064 filename=/dev/nvme0n4 00:16:58.064 Could not set queue depth (nvme0n1) 00:16:58.064 Could not set queue depth (nvme0n2) 00:16:58.064 Could not set queue depth (nvme0n3) 00:16:58.064 Could not set queue depth (nvme0n4) 00:16:58.064 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.064 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.064 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.064 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:58.064 fio-3.35 00:16:58.064 Starting 4 threads 00:16:59.439 00:16:59.439 job0: (groupid=0, jobs=1): err= 0: pid=76818: Thu Nov 21 02:33:39 2024 00:16:59.439 read: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec) 00:16:59.439 slat (usec): min=9, max=47404, avg=362.12, stdev=2551.01 00:16:59.439 clat (msec): min=17, max=130, avg=47.33, stdev=29.98 00:16:59.439 lat (msec): min=17, max=130, avg=47.69, stdev=30.26 00:16:59.439 clat percentiles (msec): 00:16:59.439 | 1.00th=[ 22], 5.00th=[ 22], 10.00th=[ 22], 20.00th=[ 24], 00:16:59.439 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 30], 60.00th=[ 41], 00:16:59.439 | 70.00th=[ 65], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 106], 00:16:59.439 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 126], 99.95th=[ 131], 00:16:59.439 | 99.99th=[ 131] 00:16:59.439 write: IOPS=1562, BW=6250KiB/s (6399kB/s)(6312KiB/1010msec); 0 zone resets 00:16:59.439 slat (usec): min=18, max=29673, avg=275.39, stdev=1836.39 00:16:59.439 clat (msec): min=4, max=116, avg=33.38, stdev=20.83 00:16:59.439 lat (msec): min=11, max=116, avg=33.66, stdev=20.99 00:16:59.439 clat percentiles (msec): 00:16:59.439 | 1.00th=[ 13], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:16:59.439 | 30.00th=[ 19], 40.00th=[ 20], 
50.00th=[ 24], 60.00th=[ 32], 00:16:59.439 | 70.00th=[ 36], 80.00th=[ 48], 90.00th=[ 64], 95.00th=[ 84], 00:16:59.439 | 99.00th=[ 102], 99.50th=[ 102], 99.90th=[ 115], 99.95th=[ 116], 00:16:59.439 | 99.99th=[ 116] 00:16:59.439 bw ( KiB/s): min= 4096, max= 8208, per=12.56%, avg=6152.00, stdev=2907.62, samples=2 00:16:59.439 iops : min= 1024, max= 2052, avg=1538.00, stdev=726.91, samples=2 00:16:59.439 lat (msec) : 10=0.03%, 20=21.90%, 50=52.22%, 100=19.81%, 250=6.04% 00:16:59.439 cpu : usr=1.59%, sys=4.96%, ctx=119, majf=0, minf=9 00:16:59.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:16:59.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.439 issued rwts: total=1536,1578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.439 job1: (groupid=0, jobs=1): err= 0: pid=76819: Thu Nov 21 02:33:39 2024 00:16:59.439 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:16:59.439 slat (usec): min=7, max=3663, avg=102.73, stdev=467.34 00:16:59.439 clat (usec): min=9404, max=16209, avg=13534.87, stdev=942.33 00:16:59.439 lat (usec): min=9921, max=19145, avg=13637.61, stdev=842.17 00:16:59.439 clat percentiles (usec): 00:16:59.439 | 1.00th=[10683], 5.00th=[11469], 10.00th=[12256], 20.00th=[13173], 00:16:59.439 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13829], 00:16:59.439 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14484], 95.00th=[14746], 00:16:59.439 | 99.00th=[15664], 99.50th=[15795], 99.90th=[16057], 99.95th=[16057], 00:16:59.439 | 99.99th=[16188] 00:16:59.439 write: IOPS=4232, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1002msec); 0 zone resets 00:16:59.439 slat (usec): min=7, max=20759, avg=128.92, stdev=790.07 00:16:59.439 clat (usec): min=451, max=80162, avg=16295.68, stdev=9375.56 00:16:59.439 lat (usec): min=3629, max=80192, avg=16424.59, stdev=9453.01 00:16:59.439 clat percentiles (usec): 00:16:59.439 | 1.00th=[ 7963], 5.00th=[11338], 10.00th=[11863], 20.00th=[12649], 00:16:59.439 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746], 00:16:59.439 | 70.00th=[15008], 80.00th=[15401], 90.00th=[16319], 95.00th=[33817], 00:16:59.439 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63177], 99.95th=[66323], 00:16:59.439 | 99.99th=[80217] 00:16:59.439 bw ( KiB/s): min=16384, max=16528, per=33.60%, avg=16456.00, stdev=101.82, samples=2 00:16:59.439 iops : min= 4096, max= 4132, avg=4114.00, stdev=25.46, samples=2 00:16:59.439 lat (usec) : 500=0.01% 00:16:59.439 lat (msec) : 4=0.14%, 10=0.85%, 20=94.96%, 50=2.51%, 100=1.52% 00:16:59.439 cpu : usr=5.29%, sys=10.59%, ctx=598, majf=0, minf=22 00:16:59.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:59.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.439 issued rwts: total=4096,4241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.439 job2: (groupid=0, jobs=1): err= 0: pid=76820: Thu Nov 21 02:33:39 2024 00:16:59.439 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1019msec) 00:16:59.439 slat (usec): min=6, max=17365, avg=157.07, stdev=1080.32 00:16:59.439 clat (usec): min=6721, max=38588, avg=20091.83, stdev=5096.73 00:16:59.439 lat (usec): min=6737, max=38609, avg=20248.90, stdev=5154.49 
00:16:59.439 clat percentiles (usec): 00:16:59.439 | 1.00th=[ 7767], 5.00th=[13173], 10.00th=[15139], 20.00th=[16319], 00:16:59.439 | 30.00th=[17695], 40.00th=[18744], 50.00th=[19530], 60.00th=[19530], 00:16:59.439 | 70.00th=[20579], 80.00th=[23725], 90.00th=[26346], 95.00th=[30802], 00:16:59.439 | 99.00th=[36963], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:16:59.439 | 99.99th=[38536] 00:16:59.439 write: IOPS=3517, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1019msec); 0 zone resets 00:16:59.439 slat (usec): min=5, max=16885, avg=135.76, stdev=870.91 00:16:59.439 clat (usec): min=5669, max=40080, avg=18769.16, stdev=4682.88 00:16:59.439 lat (usec): min=5695, max=40093, avg=18904.92, stdev=4769.57 00:16:59.439 clat percentiles (usec): 00:16:59.439 | 1.00th=[ 6849], 5.00th=[ 9110], 10.00th=[13173], 20.00th=[15533], 00:16:59.439 | 30.00th=[16712], 40.00th=[19268], 50.00th=[20055], 60.00th=[20579], 00:16:59.439 | 70.00th=[21103], 80.00th=[21627], 90.00th=[21890], 95.00th=[22414], 00:16:59.439 | 99.00th=[36439], 99.50th=[38011], 99.90th=[40109], 99.95th=[40109], 00:16:59.439 | 99.99th=[40109] 00:16:59.439 bw ( KiB/s): min=13000, max=14733, per=28.32%, avg=13866.50, stdev=1225.42, samples=2 00:16:59.439 iops : min= 3250, max= 3683, avg=3466.50, stdev=306.18, samples=2 00:16:59.439 lat (msec) : 10=3.90%, 20=53.86%, 50=42.24% 00:16:59.439 cpu : usr=4.52%, sys=8.25%, ctx=359, majf=0, minf=13 00:16:59.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:59.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.439 issued rwts: total=3078,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.439 job3: (groupid=0, jobs=1): err= 0: pid=76821: Thu Nov 21 02:33:39 2024 00:16:59.439 read: IOPS=2739, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1011msec) 00:16:59.439 slat (usec): min=6, max=27701, avg=164.34, stdev=1242.75 00:16:59.439 clat (usec): min=9328, max=59434, avg=20596.12, stdev=6941.33 00:16:59.439 lat (usec): min=9342, max=59466, avg=20760.46, stdev=7036.58 00:16:59.439 clat percentiles (usec): 00:16:59.439 | 1.00th=[10028], 5.00th=[13173], 10.00th=[13829], 20.00th=[15795], 00:16:59.439 | 30.00th=[16909], 40.00th=[17957], 50.00th=[18744], 60.00th=[19530], 00:16:59.439 | 70.00th=[21890], 80.00th=[25035], 90.00th=[31589], 95.00th=[35914], 00:16:59.439 | 99.00th=[43254], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497], 00:16:59.439 | 99.99th=[59507] 00:16:59.439 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:16:59.439 slat (usec): min=5, max=17382, avg=169.24, stdev=1075.80 00:16:59.439 clat (usec): min=3323, max=49683, avg=23101.18, stdev=10936.40 00:16:59.439 lat (usec): min=3347, max=49693, avg=23270.42, stdev=11040.84 00:16:59.439 clat percentiles (usec): 00:16:59.439 | 1.00th=[ 8979], 5.00th=[13042], 10.00th=[13566], 20.00th=[14615], 00:16:59.439 | 30.00th=[15664], 40.00th=[16581], 50.00th=[18744], 60.00th=[20317], 00:16:59.439 | 70.00th=[21890], 80.00th=[33817], 90.00th=[42206], 95.00th=[46400], 00:16:59.439 | 99.00th=[49021], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:16:59.439 | 99.99th=[49546] 00:16:59.439 bw ( KiB/s): min=12272, max=12304, per=25.09%, avg=12288.00, stdev=22.63, samples=2 00:16:59.439 iops : min= 3068, max= 3076, avg=3072.00, stdev= 5.66, samples=2 00:16:59.439 lat (msec) : 4=0.15%, 10=1.25%, 20=57.46%, 50=41.12%, 100=0.02% 
00:16:59.439 cpu : usr=3.47%, sys=7.43%, ctx=244, majf=0, minf=5 00:16:59.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:59.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:59.439 issued rwts: total=2770,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:59.439 00:16:59.439 Run status group 0 (all jobs): 00:16:59.439 READ: bw=44.0MiB/s (46.1MB/s), 6083KiB/s-16.0MiB/s (6229kB/s-16.7MB/s), io=44.8MiB (47.0MB), run=1002-1019msec 00:16:59.439 WRITE: bw=47.8MiB/s (50.1MB/s), 6250KiB/s-16.5MiB/s (6399kB/s-17.3MB/s), io=48.7MiB (51.1MB), run=1002-1019msec 00:16:59.439 00:16:59.439 Disk stats (read/write): 00:16:59.439 nvme0n1: ios=1074/1426, merge=0/0, ticks=19803/14683, in_queue=34486, util=89.58% 00:16:59.439 nvme0n2: ios=3555/3584, merge=0/0, ticks=11180/15020, in_queue=26200, util=88.78% 00:16:59.439 nvme0n3: ios=2564/3071, merge=0/0, ticks=48123/54430, in_queue=102553, util=89.20% 00:16:59.439 nvme0n4: ios=2581/2655, merge=0/0, ticks=49262/53108, in_queue=102370, util=90.58% 00:16:59.439 02:33:39 -- target/fio.sh@55 -- # sync 00:16:59.439 02:33:39 -- target/fio.sh@59 -- # fio_pid=76836 00:16:59.439 02:33:39 -- target/fio.sh@61 -- # sleep 3 00:16:59.439 02:33:39 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:59.439 [global] 00:16:59.439 thread=1 00:16:59.439 invalidate=1 00:16:59.439 rw=read 00:16:59.439 time_based=1 00:16:59.439 runtime=10 00:16:59.439 ioengine=libaio 00:16:59.439 direct=1 00:16:59.439 bs=4096 00:16:59.439 iodepth=1 00:16:59.439 norandommap=1 00:16:59.439 numjobs=1 00:16:59.439 00:16:59.439 [job0] 00:16:59.439 filename=/dev/nvme0n1 00:16:59.439 [job1] 00:16:59.439 filename=/dev/nvme0n2 00:16:59.439 [job2] 00:16:59.439 filename=/dev/nvme0n3 00:16:59.439 [job3] 00:16:59.439 filename=/dev/nvme0n4 00:16:59.439 Could not set queue depth (nvme0n1) 00:16:59.439 Could not set queue depth (nvme0n2) 00:16:59.440 Could not set queue depth (nvme0n3) 00:16:59.440 Could not set queue depth (nvme0n4) 00:16:59.440 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.440 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.440 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.440 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:59.440 fio-3.35 00:16:59.440 Starting 4 threads 00:17:02.722 02:33:42 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:02.722 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43925504, buflen=4096 00:17:02.722 fio: pid=76880, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:02.722 02:33:43 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:02.981 fio: pid=76879, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:02.981 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=33734656, buflen=4096 00:17:02.981 02:33:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:02.981 02:33:43 -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:03.239 fio: pid=76877, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:03.239 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46907392, buflen=4096 00:17:03.239 02:33:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.239 02:33:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:03.239 fio: pid=76878, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:03.239 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=43671552, buflen=4096 00:17:03.498 00:17:03.498 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76877: Thu Nov 21 02:33:43 2024 00:17:03.498 read: IOPS=3285, BW=12.8MiB/s (13.5MB/s)(44.7MiB/3486msec) 00:17:03.498 slat (usec): min=12, max=15012, avg=23.60, stdev=212.36 00:17:03.498 clat (usec): min=140, max=3133, avg=278.85, stdev=53.06 00:17:03.498 lat (usec): min=153, max=15345, avg=302.45, stdev=219.47 00:17:03.498 clat percentiles (usec): 00:17:03.498 | 1.00th=[ 167], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 241], 00:17:03.498 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 277], 60.00th=[ 293], 00:17:03.498 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 343], 00:17:03.498 | 99.00th=[ 371], 99.50th=[ 383], 99.90th=[ 510], 99.95th=[ 758], 00:17:03.498 | 99.99th=[ 1205] 00:17:03.498 bw ( KiB/s): min=11472, max=14792, per=29.21%, avg=12869.50, stdev=1368.10, samples=6 00:17:03.498 iops : min= 2868, max= 3698, avg=3217.33, stdev=341.97, samples=6 00:17:03.498 lat (usec) : 250=30.08%, 500=69.81%, 750=0.05%, 1000=0.02% 00:17:03.498 lat (msec) : 2=0.03%, 4=0.01% 00:17:03.498 cpu : usr=1.12%, sys=5.14%, ctx=11459, majf=0, minf=1 00:17:03.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.498 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.498 issued rwts: total=11453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.498 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76878: Thu Nov 21 02:33:43 2024 00:17:03.498 read: IOPS=2859, BW=11.2MiB/s (11.7MB/s)(41.6MiB/3729msec) 00:17:03.498 slat (usec): min=9, max=14378, avg=21.64, stdev=231.13 00:17:03.498 clat (usec): min=106, max=4156, avg=326.48, stdev=118.52 00:17:03.498 lat (usec): min=148, max=14686, avg=348.12, stdev=259.65 00:17:03.498 clat percentiles (usec): 00:17:03.498 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 165], 20.00th=[ 260], 00:17:03.498 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 326], 00:17:03.498 | 70.00th=[ 343], 80.00th=[ 396], 90.00th=[ 498], 95.00th=[ 515], 00:17:03.498 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 725], 99.95th=[ 1500], 00:17:03.498 | 99.99th=[ 3326] 00:17:03.498 bw ( KiB/s): min= 7880, max=13059, per=25.03%, avg=11029.00, stdev=1999.87, samples=7 00:17:03.498 iops : min= 1970, max= 3264, avg=2757.14, stdev=499.85, samples=7 00:17:03.498 lat (usec) : 250=17.32%, 500=73.51%, 750=9.08%, 1000=0.03% 00:17:03.498 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:17:03.498 cpu : usr=1.05%, sys=3.65%, ctx=10689, majf=0, minf=1 00:17:03.498 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.498 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.498 issued rwts: total=10663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.498 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76879: Thu Nov 21 02:33:43 2024 00:17:03.498 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(32.2MiB/3231msec) 00:17:03.498 slat (usec): min=11, max=7639, avg=23.88, stdev=115.99 00:17:03.498 clat (usec): min=164, max=2594, avg=366.37, stdev=97.20 00:17:03.498 lat (usec): min=183, max=7973, avg=390.25, stdev=149.93 00:17:03.498 clat percentiles (usec): 00:17:03.498 | 1.00th=[ 208], 5.00th=[ 249], 10.00th=[ 265], 20.00th=[ 293], 00:17:03.498 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 359], 00:17:03.498 | 70.00th=[ 388], 80.00th=[ 478], 90.00th=[ 506], 95.00th=[ 523], 00:17:03.498 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 693], 99.95th=[ 775], 00:17:03.498 | 99.99th=[ 2606] 00:17:03.498 bw ( KiB/s): min= 7984, max=12368, per=23.50%, avg=10353.33, stdev=1809.36, samples=6 00:17:03.498 iops : min= 1996, max= 3092, avg=2588.33, stdev=452.34, samples=6 00:17:03.498 lat (usec) : 250=5.27%, 500=82.63%, 750=12.03%, 1000=0.02% 00:17:03.498 lat (msec) : 2=0.01%, 4=0.02% 00:17:03.498 cpu : usr=1.15%, sys=4.74%, ctx=8267, majf=0, minf=2 00:17:03.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.498 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.498 issued rwts: total=8237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.498 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76880: Thu Nov 21 02:33:43 2024 00:17:03.498 read: IOPS=3652, BW=14.3MiB/s (15.0MB/s)(41.9MiB/2936msec) 00:17:03.498 slat (nsec): min=12465, max=76654, avg=17710.88, stdev=5473.31 00:17:03.498 clat (usec): min=130, max=26311, avg=254.38, stdev=271.88 00:17:03.498 lat (usec): min=145, max=26340, avg=272.09, stdev=272.59 00:17:03.498 clat percentiles (usec): 00:17:03.498 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:17:03.499 | 30.00th=[ 161], 40.00th=[ 174], 50.00th=[ 260], 60.00th=[ 297], 00:17:03.499 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 388], 00:17:03.499 | 99.00th=[ 437], 99.50th=[ 474], 99.90th=[ 652], 99.95th=[ 1123], 00:17:03.499 | 99.99th=[ 3589] 00:17:03.499 bw ( KiB/s): min=10272, max=21496, per=30.02%, avg=13227.20, stdev=4678.54, samples=5 00:17:03.499 iops : min= 2568, max= 5374, avg=3306.80, stdev=1169.63, samples=5 00:17:03.499 lat (usec) : 250=48.45%, 500=51.14%, 750=0.33%, 1000=0.02% 00:17:03.499 lat (msec) : 2=0.03%, 4=0.02%, 50=0.01% 00:17:03.499 cpu : usr=1.19%, sys=5.11%, ctx=10726, majf=0, minf=2 00:17:03.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.499 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.499 issued rwts: total=10725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.499 latency : target=0, window=0, percentile=100.00%, depth=1 
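The err=95 (Operation not supported) lines in the per-job reports above are the point of this hotplug test: while fio is still reading from the exported namespaces, the script deletes the backing Malloc bdevs over RPC, so in-flight I/O starts failing, and the harness later confirms this with "nvmf hotplug test: fio failed as expected". A minimal sketch of that interaction, with illustrative fio parameters rather than the job file the script actually generates:

    # read workload against the exported namespace, left running in the background
    fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4k --time_based --runtime=30 &
    # delete the backing bdev on the target side while the job is still in flight
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
    # the job now fails with err=95 (Operation not supported), matching the output above
    wait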
00:17:03.499 00:17:03.499 Run status group 0 (all jobs): 00:17:03.499 READ: bw=43.0MiB/s (45.1MB/s), 9.96MiB/s-14.3MiB/s (10.4MB/s-15.0MB/s), io=160MiB (168MB), run=2936-3729msec 00:17:03.499 00:17:03.499 Disk stats (read/write): 00:17:03.499 nvme0n1: ios=11008/0, merge=0/0, ticks=3146/0, in_queue=3146, util=95.19% 00:17:03.499 nvme0n2: ios=10009/0, merge=0/0, ticks=3375/0, in_queue=3375, util=95.29% 00:17:03.499 nvme0n3: ios=8003/0, merge=0/0, ticks=2937/0, in_queue=2937, util=96.43% 00:17:03.499 nvme0n4: ios=10344/0, merge=0/0, ticks=2720/0, in_queue=2720, util=96.76% 00:17:03.499 02:33:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.499 02:33:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:03.757 02:33:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:03.757 02:33:44 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:04.016 02:33:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.016 02:33:44 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:04.274 02:33:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.274 02:33:44 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:04.532 02:33:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:04.532 02:33:45 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:04.790 02:33:45 -- target/fio.sh@69 -- # fio_status=0 00:17:04.790 02:33:45 -- target/fio.sh@70 -- # wait 76836 00:17:04.790 02:33:45 -- target/fio.sh@70 -- # fio_status=4 00:17:04.790 02:33:45 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.790 02:33:45 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:04.790 02:33:45 -- common/autotest_common.sh@1208 -- # local i=0 00:17:04.790 02:33:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:04.790 02:33:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.790 02:33:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:04.790 02:33:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.790 02:33:45 -- common/autotest_common.sh@1220 -- # return 0 00:17:04.790 02:33:45 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:04.790 nvmf hotplug test: fio failed as expected 00:17:04.790 02:33:45 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:04.790 02:33:45 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.047 02:33:45 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:05.047 02:33:45 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:05.047 02:33:45 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:05.047 02:33:45 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:05.047 02:33:45 -- target/fio.sh@91 -- # nvmftestfini 00:17:05.047 02:33:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:05.047 02:33:45 -- nvmf/common.sh@116 -- # sync 
00:17:05.047 02:33:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:05.047 02:33:45 -- nvmf/common.sh@119 -- # set +e 00:17:05.047 02:33:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:05.047 02:33:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:05.047 rmmod nvme_tcp 00:17:05.047 rmmod nvme_fabrics 00:17:05.047 rmmod nvme_keyring 00:17:05.306 02:33:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:05.306 02:33:45 -- nvmf/common.sh@123 -- # set -e 00:17:05.306 02:33:45 -- nvmf/common.sh@124 -- # return 0 00:17:05.306 02:33:45 -- nvmf/common.sh@477 -- # '[' -n 76348 ']' 00:17:05.306 02:33:45 -- nvmf/common.sh@478 -- # killprocess 76348 00:17:05.306 02:33:45 -- common/autotest_common.sh@936 -- # '[' -z 76348 ']' 00:17:05.306 02:33:45 -- common/autotest_common.sh@940 -- # kill -0 76348 00:17:05.306 02:33:45 -- common/autotest_common.sh@941 -- # uname 00:17:05.306 02:33:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.306 02:33:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76348 00:17:05.306 02:33:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:05.306 02:33:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:05.306 killing process with pid 76348 00:17:05.306 02:33:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76348' 00:17:05.306 02:33:45 -- common/autotest_common.sh@955 -- # kill 76348 00:17:05.306 02:33:45 -- common/autotest_common.sh@960 -- # wait 76348 00:17:05.565 02:33:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:05.565 02:33:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:05.565 02:33:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:05.565 02:33:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.565 02:33:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:05.565 02:33:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.565 02:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.565 02:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.565 02:33:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:05.565 00:17:05.565 real 0m19.555s 00:17:05.565 user 1m15.172s 00:17:05.565 sys 0m7.660s 00:17:05.565 02:33:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:05.565 02:33:46 -- common/autotest_common.sh@10 -- # set +x 00:17:05.565 ************************************ 00:17:05.565 END TEST nvmf_fio_target 00:17:05.565 ************************************ 00:17:05.565 02:33:46 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:05.565 02:33:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:05.565 02:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.565 02:33:46 -- common/autotest_common.sh@10 -- # set +x 00:17:05.565 ************************************ 00:17:05.565 START TEST nvmf_bdevio 00:17:05.565 ************************************ 00:17:05.565 02:33:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:05.824 * Looking for test storage... 
00:17:05.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:05.824 02:33:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:05.824 02:33:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:05.824 02:33:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:05.824 02:33:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:05.824 02:33:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:05.824 02:33:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:05.824 02:33:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:05.824 02:33:46 -- scripts/common.sh@335 -- # IFS=.-: 00:17:05.824 02:33:46 -- scripts/common.sh@335 -- # read -ra ver1 00:17:05.824 02:33:46 -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.824 02:33:46 -- scripts/common.sh@336 -- # read -ra ver2 00:17:05.824 02:33:46 -- scripts/common.sh@337 -- # local 'op=<' 00:17:05.824 02:33:46 -- scripts/common.sh@339 -- # ver1_l=2 00:17:05.824 02:33:46 -- scripts/common.sh@340 -- # ver2_l=1 00:17:05.824 02:33:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:05.824 02:33:46 -- scripts/common.sh@343 -- # case "$op" in 00:17:05.824 02:33:46 -- scripts/common.sh@344 -- # : 1 00:17:05.824 02:33:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:05.824 02:33:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:05.824 02:33:46 -- scripts/common.sh@364 -- # decimal 1 00:17:05.824 02:33:46 -- scripts/common.sh@352 -- # local d=1 00:17:05.824 02:33:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.824 02:33:46 -- scripts/common.sh@354 -- # echo 1 00:17:05.824 02:33:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:05.824 02:33:46 -- scripts/common.sh@365 -- # decimal 2 00:17:05.824 02:33:46 -- scripts/common.sh@352 -- # local d=2 00:17:05.824 02:33:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.824 02:33:46 -- scripts/common.sh@354 -- # echo 2 00:17:05.824 02:33:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:05.824 02:33:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:05.824 02:33:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:05.824 02:33:46 -- scripts/common.sh@367 -- # return 0 00:17:05.824 02:33:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.824 02:33:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:05.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.824 --rc genhtml_branch_coverage=1 00:17:05.825 --rc genhtml_function_coverage=1 00:17:05.825 --rc genhtml_legend=1 00:17:05.825 --rc geninfo_all_blocks=1 00:17:05.825 --rc geninfo_unexecuted_blocks=1 00:17:05.825 00:17:05.825 ' 00:17:05.825 02:33:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:05.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.825 --rc genhtml_branch_coverage=1 00:17:05.825 --rc genhtml_function_coverage=1 00:17:05.825 --rc genhtml_legend=1 00:17:05.825 --rc geninfo_all_blocks=1 00:17:05.825 --rc geninfo_unexecuted_blocks=1 00:17:05.825 00:17:05.825 ' 00:17:05.825 02:33:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:05.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.825 --rc genhtml_branch_coverage=1 00:17:05.825 --rc genhtml_function_coverage=1 00:17:05.825 --rc genhtml_legend=1 00:17:05.825 --rc geninfo_all_blocks=1 00:17:05.825 --rc geninfo_unexecuted_blocks=1 00:17:05.825 00:17:05.825 ' 00:17:05.825 
02:33:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:05.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.825 --rc genhtml_branch_coverage=1 00:17:05.825 --rc genhtml_function_coverage=1 00:17:05.825 --rc genhtml_legend=1 00:17:05.825 --rc geninfo_all_blocks=1 00:17:05.825 --rc geninfo_unexecuted_blocks=1 00:17:05.825 00:17:05.825 ' 00:17:05.825 02:33:46 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.825 02:33:46 -- nvmf/common.sh@7 -- # uname -s 00:17:05.825 02:33:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.825 02:33:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.825 02:33:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.825 02:33:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.825 02:33:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.825 02:33:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.825 02:33:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.825 02:33:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.825 02:33:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.825 02:33:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.825 02:33:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:17:05.825 02:33:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:17:05.825 02:33:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.825 02:33:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.825 02:33:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.825 02:33:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.825 02:33:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.825 02:33:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.825 02:33:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.825 02:33:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.825 02:33:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.825 02:33:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.825 02:33:46 -- paths/export.sh@5 -- # export PATH 00:17:05.825 02:33:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.825 02:33:46 -- nvmf/common.sh@46 -- # : 0 00:17:05.825 02:33:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:05.825 02:33:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:05.825 02:33:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:05.825 02:33:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.825 02:33:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.825 02:33:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:05.825 02:33:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:05.825 02:33:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:05.825 02:33:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.825 02:33:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.825 02:33:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:05.825 02:33:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:05.825 02:33:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.825 02:33:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:05.825 02:33:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:05.825 02:33:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:05.825 02:33:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.825 02:33:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.825 02:33:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.825 02:33:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:05.825 02:33:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:05.825 02:33:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:05.825 02:33:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:05.825 02:33:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:05.825 02:33:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:05.825 02:33:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.825 02:33:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.825 02:33:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:05.825 02:33:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:05.825 02:33:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.825 02:33:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.825 02:33:46 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.825 02:33:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.825 02:33:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.825 02:33:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.825 02:33:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.825 02:33:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.825 02:33:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:05.825 02:33:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:05.825 Cannot find device "nvmf_tgt_br" 00:17:05.825 02:33:46 -- nvmf/common.sh@154 -- # true 00:17:05.825 02:33:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.825 Cannot find device "nvmf_tgt_br2" 00:17:05.825 02:33:46 -- nvmf/common.sh@155 -- # true 00:17:05.825 02:33:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:05.825 02:33:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:05.825 Cannot find device "nvmf_tgt_br" 00:17:05.825 02:33:46 -- nvmf/common.sh@157 -- # true 00:17:05.825 02:33:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:05.825 Cannot find device "nvmf_tgt_br2" 00:17:05.825 02:33:46 -- nvmf/common.sh@158 -- # true 00:17:05.825 02:33:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:06.084 02:33:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:06.084 02:33:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.084 02:33:46 -- nvmf/common.sh@161 -- # true 00:17:06.084 02:33:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.084 02:33:46 -- nvmf/common.sh@162 -- # true 00:17:06.084 02:33:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:06.084 02:33:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:06.084 02:33:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:06.084 02:33:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:06.084 02:33:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:06.084 02:33:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:06.084 02:33:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:06.084 02:33:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:06.084 02:33:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:06.084 02:33:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:06.084 02:33:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:06.084 02:33:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:06.084 02:33:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:06.084 02:33:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.084 02:33:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:06.084 02:33:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:06.084 02:33:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:06.084 02:33:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:06.084 02:33:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.084 02:33:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:06.084 02:33:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.084 02:33:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.084 02:33:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:06.084 02:33:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:06.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:17:06.084 00:17:06.084 --- 10.0.0.2 ping statistics --- 00:17:06.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.084 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:06.084 02:33:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:06.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:06.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:06.084 00:17:06.084 --- 10.0.0.3 ping statistics --- 00:17:06.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.085 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:06.085 02:33:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:06.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:06.085 00:17:06.085 --- 10.0.0.1 ping statistics --- 00:17:06.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.085 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:06.085 02:33:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.085 02:33:46 -- nvmf/common.sh@421 -- # return 0 00:17:06.085 02:33:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:06.085 02:33:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.085 02:33:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:06.085 02:33:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:06.085 02:33:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.085 02:33:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:06.085 02:33:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:06.085 02:33:46 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:06.085 02:33:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:06.085 02:33:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:06.085 02:33:46 -- common/autotest_common.sh@10 -- # set +x 00:17:06.085 02:33:46 -- nvmf/common.sh@469 -- # nvmfpid=77217 00:17:06.085 02:33:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:06.085 02:33:46 -- nvmf/common.sh@470 -- # waitforlisten 77217 00:17:06.085 02:33:46 -- common/autotest_common.sh@829 -- # '[' -z 77217 ']' 00:17:06.085 02:33:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.085 02:33:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.085 02:33:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:06.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.085 02:33:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.085 02:33:46 -- common/autotest_common.sh@10 -- # set +x 00:17:06.343 [2024-11-21 02:33:46.768220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:06.343 [2024-11-21 02:33:46.768301] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.343 [2024-11-21 02:33:46.910732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.601 [2024-11-21 02:33:47.020357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:06.602 [2024-11-21 02:33:47.020564] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.602 [2024-11-21 02:33:47.020582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.602 [2024-11-21 02:33:47.020596] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.602 [2024-11-21 02:33:47.020782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.602 [2024-11-21 02:33:47.021423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:06.602 [2024-11-21 02:33:47.021565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:06.602 [2024-11-21 02:33:47.021576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.170 02:33:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.170 02:33:47 -- common/autotest_common.sh@862 -- # return 0 00:17:07.170 02:33:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:07.170 02:33:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:07.170 02:33:47 -- common/autotest_common.sh@10 -- # set +x 00:17:07.170 02:33:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.170 02:33:47 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.170 02:33:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.170 02:33:47 -- common/autotest_common.sh@10 -- # set +x 00:17:07.170 [2024-11-21 02:33:47.813402] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.428 02:33:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.428 02:33:47 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.428 02:33:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.428 02:33:47 -- common/autotest_common.sh@10 -- # set +x 00:17:07.428 Malloc0 00:17:07.428 02:33:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.428 02:33:47 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.428 02:33:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.428 02:33:47 -- common/autotest_common.sh@10 -- # set +x 00:17:07.428 02:33:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.428 02:33:47 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.428 02:33:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.428 02:33:47 -- common/autotest_common.sh@10 -- # set +x 
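For reference, the target-side setup that bdevio.sh has traced up to this point reduces to a short rpc.py sequence (a sketch assuming the default /var/tmp/spdk.sock control socket; the script drives the same calls through its rpc_cmd wrapper):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as passed by the harness
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace of cnode1

The listener on 10.0.0.2:4420 is added immediately below, after which the bdevio application attaches to the subsystem over TCP and runs its block-device test suite against Nvme1n1.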
00:17:07.428 02:33:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.428 02:33:47 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.428 02:33:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.428 02:33:47 -- common/autotest_common.sh@10 -- # set +x 00:17:07.428 [2024-11-21 02:33:47.883565] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.428 02:33:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.428 02:33:47 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:07.428 02:33:47 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:07.428 02:33:47 -- nvmf/common.sh@520 -- # config=() 00:17:07.428 02:33:47 -- nvmf/common.sh@520 -- # local subsystem config 00:17:07.428 02:33:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:07.428 02:33:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:07.428 { 00:17:07.428 "params": { 00:17:07.429 "name": "Nvme$subsystem", 00:17:07.429 "trtype": "$TEST_TRANSPORT", 00:17:07.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.429 "adrfam": "ipv4", 00:17:07.429 "trsvcid": "$NVMF_PORT", 00:17:07.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.429 "hdgst": ${hdgst:-false}, 00:17:07.429 "ddgst": ${ddgst:-false} 00:17:07.429 }, 00:17:07.429 "method": "bdev_nvme_attach_controller" 00:17:07.429 } 00:17:07.429 EOF 00:17:07.429 )") 00:17:07.429 02:33:47 -- nvmf/common.sh@542 -- # cat 00:17:07.429 02:33:47 -- nvmf/common.sh@544 -- # jq . 00:17:07.429 02:33:47 -- nvmf/common.sh@545 -- # IFS=, 00:17:07.429 02:33:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:07.429 "params": { 00:17:07.429 "name": "Nvme1", 00:17:07.429 "trtype": "tcp", 00:17:07.429 "traddr": "10.0.0.2", 00:17:07.429 "adrfam": "ipv4", 00:17:07.429 "trsvcid": "4420", 00:17:07.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.429 "hdgst": false, 00:17:07.429 "ddgst": false 00:17:07.429 }, 00:17:07.429 "method": "bdev_nvme_attach_controller" 00:17:07.429 }' 00:17:07.429 [2024-11-21 02:33:47.950810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:07.429 [2024-11-21 02:33:47.950913] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77271 ] 00:17:07.687 [2024-11-21 02:33:48.090205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.687 [2024-11-21 02:33:48.202615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.687 [2024-11-21 02:33:48.202804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.687 [2024-11-21 02:33:48.202806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.945 [2024-11-21 02:33:48.409904] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:17:07.945 [2024-11-21 02:33:48.409994] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:07.945 I/O targets: 00:17:07.945 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:07.945 00:17:07.945 00:17:07.945 CUnit - A unit testing framework for C - Version 2.1-3 00:17:07.945 http://cunit.sourceforge.net/ 00:17:07.945 00:17:07.945 00:17:07.945 Suite: bdevio tests on: Nvme1n1 00:17:07.945 Test: blockdev write read block ...passed 00:17:07.945 Test: blockdev write zeroes read block ...passed 00:17:07.945 Test: blockdev write zeroes read no split ...passed 00:17:07.945 Test: blockdev write zeroes read split ...passed 00:17:07.945 Test: blockdev write zeroes read split partial ...passed 00:17:07.945 Test: blockdev reset ...[2024-11-21 02:33:48.527552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:07.945 [2024-11-21 02:33:48.527641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1703910 (9): Bad file descriptor 00:17:07.945 [2024-11-21 02:33:48.539970] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:07.945 passed 00:17:07.945 Test: blockdev write read 8 blocks ...passed 00:17:07.945 Test: blockdev write read size > 128k ...passed 00:17:07.945 Test: blockdev write read invalid size ...passed 00:17:07.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:07.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:07.945 Test: blockdev write read max offset ...passed 00:17:08.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:08.204 Test: blockdev writev readv 8 blocks ...passed 00:17:08.204 Test: blockdev writev readv 30 x 1block ...passed 00:17:08.204 Test: blockdev writev readv block ...passed 00:17:08.204 Test: blockdev writev readv size > 128k ...passed 00:17:08.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:08.204 Test: blockdev comparev and writev ...[2024-11-21 02:33:48.714732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.204 [2024-11-21 02:33:48.714793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.714823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.204 [2024-11-21 02:33:48.714832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.715326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.204 [2024-11-21 02:33:48.715350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.715365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.204 [2024-11-21 02:33:48.715375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.715732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.204 [2024-11-21 02:33:48.715765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.715781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.204 [2024-11-21 02:33:48.715790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.716411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.204 [2024-11-21 02:33:48.716450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.716466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.204 [2024-11-21 02:33:48.716476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:08.204 passed 00:17:08.204 Test: blockdev nvme passthru rw ...passed 00:17:08.204 Test: blockdev nvme passthru vendor specific ...[2024-11-21 02:33:48.800080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.204 [2024-11-21 02:33:48.800106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.800265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.204 [2024-11-21 02:33:48.800280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.800399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.204 [2024-11-21 02:33:48.800425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:08.204 [2024-11-21 02:33:48.800566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.204 [2024-11-21 02:33:48.800589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:08.204 passed 00:17:08.204 Test: blockdev nvme admin passthru ...passed 00:17:08.462 Test: blockdev copy ...passed 00:17:08.462 00:17:08.462 Run Summary: Type Total Ran Passed Failed Inactive 00:17:08.462 suites 1 1 n/a 0 0 00:17:08.462 tests 23 23 23 0 0 00:17:08.462 asserts 152 152 152 0 n/a 00:17:08.462 00:17:08.462 Elapsed time = 0.897 seconds 00:17:08.792 02:33:49 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.792 02:33:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.792 02:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:08.792 02:33:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.792 02:33:49 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:08.792 02:33:49 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:08.792 02:33:49 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:08.792 02:33:49 -- nvmf/common.sh@116 -- # sync 00:17:08.792 02:33:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:08.792 02:33:49 -- nvmf/common.sh@119 -- # set +e 00:17:08.792 02:33:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:08.792 02:33:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:08.792 rmmod nvme_tcp 00:17:08.792 rmmod nvme_fabrics 00:17:08.792 rmmod nvme_keyring 00:17:08.792 02:33:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:08.792 02:33:49 -- nvmf/common.sh@123 -- # set -e 00:17:08.792 02:33:49 -- nvmf/common.sh@124 -- # return 0 00:17:08.792 02:33:49 -- nvmf/common.sh@477 -- # '[' -n 77217 ']' 00:17:08.792 02:33:49 -- nvmf/common.sh@478 -- # killprocess 77217 00:17:08.792 02:33:49 -- common/autotest_common.sh@936 -- # '[' -z 77217 ']' 00:17:08.792 02:33:49 -- common/autotest_common.sh@940 -- # kill -0 77217 00:17:08.792 02:33:49 -- common/autotest_common.sh@941 -- # uname 00:17:08.792 02:33:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.792 02:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77217 00:17:08.792 killing process with pid 77217 00:17:08.792 02:33:49 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:08.792 02:33:49 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:08.792 02:33:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77217' 00:17:08.792 02:33:49 -- common/autotest_common.sh@955 -- # kill 77217 00:17:08.792 02:33:49 -- common/autotest_common.sh@960 -- # wait 77217 00:17:09.058 02:33:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:09.058 02:33:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:09.058 02:33:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:09.058 02:33:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.058 02:33:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:09.058 02:33:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.058 02:33:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.058 02:33:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.058 02:33:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:09.058 00:17:09.058 real 0m3.484s 00:17:09.058 user 0m12.506s 00:17:09.058 sys 0m0.938s 00:17:09.058 02:33:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:09.058 02:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:09.058 ************************************ 00:17:09.058 END TEST nvmf_bdevio 00:17:09.058 ************************************ 00:17:09.058 02:33:49 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:17:09.058 02:33:49 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:09.058 02:33:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:09.058 02:33:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.058 02:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:09.058 ************************************ 00:17:09.058 START TEST nvmf_bdevio_no_huge 00:17:09.058 ************************************ 00:17:09.058 02:33:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:09.331 * Looking for test storage... 
00:17:09.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:09.331 02:33:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:09.331 02:33:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:09.331 02:33:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:09.331 02:33:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:09.331 02:33:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:09.331 02:33:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:09.331 02:33:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:09.331 02:33:49 -- scripts/common.sh@335 -- # IFS=.-: 00:17:09.331 02:33:49 -- scripts/common.sh@335 -- # read -ra ver1 00:17:09.331 02:33:49 -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.331 02:33:49 -- scripts/common.sh@336 -- # read -ra ver2 00:17:09.331 02:33:49 -- scripts/common.sh@337 -- # local 'op=<' 00:17:09.331 02:33:49 -- scripts/common.sh@339 -- # ver1_l=2 00:17:09.331 02:33:49 -- scripts/common.sh@340 -- # ver2_l=1 00:17:09.331 02:33:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:09.331 02:33:49 -- scripts/common.sh@343 -- # case "$op" in 00:17:09.331 02:33:49 -- scripts/common.sh@344 -- # : 1 00:17:09.331 02:33:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:09.331 02:33:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:09.331 02:33:49 -- scripts/common.sh@364 -- # decimal 1 00:17:09.331 02:33:49 -- scripts/common.sh@352 -- # local d=1 00:17:09.331 02:33:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.331 02:33:49 -- scripts/common.sh@354 -- # echo 1 00:17:09.331 02:33:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:09.331 02:33:49 -- scripts/common.sh@365 -- # decimal 2 00:17:09.331 02:33:49 -- scripts/common.sh@352 -- # local d=2 00:17:09.331 02:33:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.331 02:33:49 -- scripts/common.sh@354 -- # echo 2 00:17:09.331 02:33:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:09.331 02:33:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:09.331 02:33:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:09.331 02:33:49 -- scripts/common.sh@367 -- # return 0 00:17:09.331 02:33:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.331 02:33:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:09.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.331 --rc genhtml_branch_coverage=1 00:17:09.331 --rc genhtml_function_coverage=1 00:17:09.331 --rc genhtml_legend=1 00:17:09.331 --rc geninfo_all_blocks=1 00:17:09.331 --rc geninfo_unexecuted_blocks=1 00:17:09.331 00:17:09.331 ' 00:17:09.331 02:33:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:09.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.331 --rc genhtml_branch_coverage=1 00:17:09.331 --rc genhtml_function_coverage=1 00:17:09.331 --rc genhtml_legend=1 00:17:09.331 --rc geninfo_all_blocks=1 00:17:09.332 --rc geninfo_unexecuted_blocks=1 00:17:09.332 00:17:09.332 ' 00:17:09.332 02:33:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:09.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.332 --rc genhtml_branch_coverage=1 00:17:09.332 --rc genhtml_function_coverage=1 00:17:09.332 --rc genhtml_legend=1 00:17:09.332 --rc geninfo_all_blocks=1 00:17:09.332 --rc geninfo_unexecuted_blocks=1 00:17:09.332 00:17:09.332 ' 00:17:09.332 
02:33:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:09.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.332 --rc genhtml_branch_coverage=1 00:17:09.332 --rc genhtml_function_coverage=1 00:17:09.332 --rc genhtml_legend=1 00:17:09.332 --rc geninfo_all_blocks=1 00:17:09.332 --rc geninfo_unexecuted_blocks=1 00:17:09.332 00:17:09.332 ' 00:17:09.332 02:33:49 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:09.332 02:33:49 -- nvmf/common.sh@7 -- # uname -s 00:17:09.332 02:33:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.332 02:33:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.332 02:33:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.332 02:33:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.332 02:33:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.332 02:33:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.332 02:33:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.332 02:33:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.332 02:33:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.332 02:33:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.332 02:33:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:17:09.332 02:33:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:17:09.332 02:33:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.332 02:33:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.332 02:33:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:09.332 02:33:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:09.332 02:33:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.332 02:33:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.332 02:33:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.332 02:33:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.332 02:33:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.332 02:33:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.332 02:33:49 -- paths/export.sh@5 -- # export PATH 00:17:09.332 02:33:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.332 02:33:49 -- nvmf/common.sh@46 -- # : 0 00:17:09.332 02:33:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:09.332 02:33:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:09.332 02:33:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:09.332 02:33:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.332 02:33:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.332 02:33:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:09.332 02:33:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:09.332 02:33:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:09.332 02:33:49 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.332 02:33:49 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.332 02:33:49 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:09.332 02:33:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:09.332 02:33:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.332 02:33:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:09.332 02:33:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:09.332 02:33:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:09.332 02:33:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.332 02:33:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.332 02:33:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.332 02:33:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:09.332 02:33:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:09.332 02:33:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:09.332 02:33:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:09.332 02:33:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:09.332 02:33:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:09.332 02:33:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.332 02:33:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.332 02:33:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:09.332 02:33:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:09.332 02:33:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:09.332 02:33:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:09.332 02:33:49 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:09.332 02:33:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.332 02:33:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:09.332 02:33:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:09.332 02:33:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:09.332 02:33:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:09.332 02:33:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:09.332 02:33:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:09.332 Cannot find device "nvmf_tgt_br" 00:17:09.332 02:33:49 -- nvmf/common.sh@154 -- # true 00:17:09.332 02:33:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:09.332 Cannot find device "nvmf_tgt_br2" 00:17:09.332 02:33:49 -- nvmf/common.sh@155 -- # true 00:17:09.332 02:33:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:09.332 02:33:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:09.332 Cannot find device "nvmf_tgt_br" 00:17:09.332 02:33:49 -- nvmf/common.sh@157 -- # true 00:17:09.332 02:33:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:09.332 Cannot find device "nvmf_tgt_br2" 00:17:09.332 02:33:49 -- nvmf/common.sh@158 -- # true 00:17:09.332 02:33:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:09.591 02:33:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:09.591 02:33:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:09.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.591 02:33:50 -- nvmf/common.sh@161 -- # true 00:17:09.591 02:33:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.591 02:33:50 -- nvmf/common.sh@162 -- # true 00:17:09.591 02:33:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:09.591 02:33:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:09.591 02:33:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:09.591 02:33:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:09.591 02:33:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:09.591 02:33:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:09.591 02:33:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:09.591 02:33:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:09.591 02:33:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:09.591 02:33:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:09.591 02:33:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:09.591 02:33:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:09.591 02:33:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:09.591 02:33:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:09.591 02:33:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:09.591 02:33:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:09.591 02:33:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:09.591 02:33:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:09.591 02:33:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:09.591 02:33:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:09.591 02:33:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:09.591 02:33:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:09.849 02:33:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:09.849 02:33:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:09.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:17:09.849 00:17:09.849 --- 10.0.0.2 ping statistics --- 00:17:09.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.849 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:09.849 02:33:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:09.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:09.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:09.849 00:17:09.849 --- 10.0.0.3 ping statistics --- 00:17:09.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.849 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:09.849 02:33:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:09.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:09.849 00:17:09.849 --- 10.0.0.1 ping statistics --- 00:17:09.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.849 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:09.849 02:33:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.849 02:33:50 -- nvmf/common.sh@421 -- # return 0 00:17:09.849 02:33:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:09.849 02:33:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.849 02:33:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:09.849 02:33:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:09.849 02:33:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.849 02:33:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:09.849 02:33:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:09.849 02:33:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:09.849 02:33:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:09.849 02:33:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.849 02:33:50 -- common/autotest_common.sh@10 -- # set +x 00:17:09.849 02:33:50 -- nvmf/common.sh@469 -- # nvmfpid=77458 00:17:09.849 02:33:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:09.849 02:33:50 -- nvmf/common.sh@470 -- # waitforlisten 77458 00:17:09.849 02:33:50 -- common/autotest_common.sh@829 -- # '[' -z 77458 ']' 00:17:09.849 02:33:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.849 02:33:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.849 02:33:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
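For reference, the veth topology that nvmf_veth_init has just finished building (and verified with the three pings above) condenses to the following commands, pulled from the trace; the namespace, interface and address names are exactly the ones nvmf/common.sh uses in this run, and the matching "ip link set ... up" calls are omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk                                   # target lives in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                 # bridge the host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on port 4420

The earlier "Cannot find device" and "Cannot open network namespace" messages come from the cleanup pass the script makes before re-creating these devices and are harmless on a fresh host.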
00:17:09.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.849 02:33:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.849 02:33:50 -- common/autotest_common.sh@10 -- # set +x 00:17:09.849 [2024-11-21 02:33:50.347710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:09.849 [2024-11-21 02:33:50.348286] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:10.109 [2024-11-21 02:33:50.496864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.109 [2024-11-21 02:33:50.600733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:10.109 [2024-11-21 02:33:50.600870] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.109 [2024-11-21 02:33:50.600883] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:10.109 [2024-11-21 02:33:50.600891] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.109 [2024-11-21 02:33:50.601065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:10.109 [2024-11-21 02:33:50.601627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:10.109 [2024-11-21 02:33:50.601784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:10.109 [2024-11-21 02:33:50.601788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.677 02:33:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.677 02:33:51 -- common/autotest_common.sh@862 -- # return 0 00:17:10.677 02:33:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:10.677 02:33:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.677 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:17:10.677 02:33:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.677 02:33:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.677 02:33:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.677 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:17:10.677 [2024-11-21 02:33:51.247694] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.677 02:33:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.677 02:33:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.677 02:33:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.677 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:17:10.677 Malloc0 00:17:10.677 02:33:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.677 02:33:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.677 02:33:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.677 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:17:10.677 02:33:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.677 02:33:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.677 02:33:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.677 02:33:51 -- common/autotest_common.sh@10 -- # set +x 
00:17:10.677 02:33:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.677 02:33:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.677 02:33:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.677 02:33:51 -- common/autotest_common.sh@10 -- # set +x 00:17:10.677 [2024-11-21 02:33:51.285948] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.677 02:33:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.677 02:33:51 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:10.677 02:33:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:10.677 02:33:51 -- nvmf/common.sh@520 -- # config=() 00:17:10.677 02:33:51 -- nvmf/common.sh@520 -- # local subsystem config 00:17:10.677 02:33:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:10.677 02:33:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:10.677 { 00:17:10.677 "params": { 00:17:10.677 "name": "Nvme$subsystem", 00:17:10.677 "trtype": "$TEST_TRANSPORT", 00:17:10.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.678 "adrfam": "ipv4", 00:17:10.678 "trsvcid": "$NVMF_PORT", 00:17:10.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.678 "hdgst": ${hdgst:-false}, 00:17:10.678 "ddgst": ${ddgst:-false} 00:17:10.678 }, 00:17:10.678 "method": "bdev_nvme_attach_controller" 00:17:10.678 } 00:17:10.678 EOF 00:17:10.678 )") 00:17:10.678 02:33:51 -- nvmf/common.sh@542 -- # cat 00:17:10.678 02:33:51 -- nvmf/common.sh@544 -- # jq . 00:17:10.678 02:33:51 -- nvmf/common.sh@545 -- # IFS=, 00:17:10.678 02:33:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:10.678 "params": { 00:17:10.678 "name": "Nvme1", 00:17:10.678 "trtype": "tcp", 00:17:10.678 "traddr": "10.0.0.2", 00:17:10.678 "adrfam": "ipv4", 00:17:10.678 "trsvcid": "4420", 00:17:10.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.678 "hdgst": false, 00:17:10.678 "ddgst": false 00:17:10.678 }, 00:17:10.678 "method": "bdev_nvme_attach_controller" 00:17:10.678 }' 00:17:10.937 [2024-11-21 02:33:51.349430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:10.937 [2024-11-21 02:33:51.349517] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid77511 ] 00:17:10.937 [2024-11-21 02:33:51.494955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.196 [2024-11-21 02:33:51.623185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.197 [2024-11-21 02:33:51.623302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.197 [2024-11-21 02:33:51.623305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.197 [2024-11-21 02:33:51.810764] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
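Stripped of the xtrace noise, the bdevio target bring-up above is a short JSON-RPC sequence. A condensed sketch, using rpc.py as shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and assuming the same repo layout as this run:

  # nvmf_tgt runs inside the target namespace, with 1024 MB of regular memory instead of hugepages
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB backing bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then acts as the initiator: the JSON printed above feeds it a single bdev_nvme_attach_controller call pointing at 10.0.0.2:4420, which surfaces as the Nvme1n1 bdev exercised by the test suite that follows.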
00:17:11.197 [2024-11-21 02:33:51.810820] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:11.197 I/O targets: 00:17:11.197 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:11.197 00:17:11.197 00:17:11.197 CUnit - A unit testing framework for C - Version 2.1-3 00:17:11.197 http://cunit.sourceforge.net/ 00:17:11.197 00:17:11.197 00:17:11.197 Suite: bdevio tests on: Nvme1n1 00:17:11.455 Test: blockdev write read block ...passed 00:17:11.455 Test: blockdev write zeroes read block ...passed 00:17:11.455 Test: blockdev write zeroes read no split ...passed 00:17:11.455 Test: blockdev write zeroes read split ...passed 00:17:11.455 Test: blockdev write zeroes read split partial ...passed 00:17:11.455 Test: blockdev reset ...[2024-11-21 02:33:51.944795] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:11.455 [2024-11-21 02:33:51.944884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64f1c0 (9): Bad file descriptor 00:17:11.455 [2024-11-21 02:33:51.958727] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:11.455 passed 00:17:11.455 Test: blockdev write read 8 blocks ...passed 00:17:11.455 Test: blockdev write read size > 128k ...passed 00:17:11.455 Test: blockdev write read invalid size ...passed 00:17:11.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:11.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:11.455 Test: blockdev write read max offset ...passed 00:17:11.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:11.455 Test: blockdev writev readv 8 blocks ...passed 00:17:11.455 Test: blockdev writev readv 30 x 1block ...passed 00:17:11.714 Test: blockdev writev readv block ...passed 00:17:11.714 Test: blockdev writev readv size > 128k ...passed 00:17:11.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:11.714 Test: blockdev comparev and writev ...[2024-11-21 02:33:52.134381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.714 [2024-11-21 02:33:52.134520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.714 [2024-11-21 02:33:52.134621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.714 [2024-11-21 02:33:52.134701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:11.714 [2024-11-21 02:33:52.135262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.714 [2024-11-21 02:33:52.135380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:11.714 [2024-11-21 02:33:52.135454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.714 [2024-11-21 02:33:52.135531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:11.714 [2024-11-21 02:33:52.135957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.714 [2024-11-21 02:33:52.136057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:11.714 [2024-11-21 02:33:52.136149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.714 [2024-11-21 02:33:52.136223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:11.714 [2024-11-21 02:33:52.136641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.714 [2024-11-21 02:33:52.136767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:11.714 [2024-11-21 02:33:52.136860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:11.714 [2024-11-21 02:33:52.136934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:11.714 passed 00:17:11.714 Test: blockdev nvme passthru rw ...passed 00:17:11.714 Test: blockdev nvme passthru vendor specific ...[2024-11-21 02:33:52.221081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.714 [2024-11-21 02:33:52.221202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:11.715 [2024-11-21 02:33:52.221409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.715 [2024-11-21 02:33:52.221511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:11.715 [2024-11-21 02:33:52.221714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.715 [2024-11-21 02:33:52.221827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:11.715 [2024-11-21 02:33:52.222048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:11.715 [2024-11-21 02:33:52.222163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:11.715 passed 00:17:11.715 Test: blockdev nvme admin passthru ...passed 00:17:11.715 Test: blockdev copy ...passed 00:17:11.715 00:17:11.715 Run Summary: Type Total Ran Passed Failed Inactive 00:17:11.715 suites 1 1 n/a 0 0 00:17:11.715 tests 23 23 23 0 0 00:17:11.715 asserts 152 152 152 0 n/a 00:17:11.715 00:17:11.715 Elapsed time = 0.932 seconds 00:17:12.282 02:33:52 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.282 02:33:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.282 02:33:52 -- common/autotest_common.sh@10 -- # set +x 00:17:12.282 02:33:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.282 02:33:52 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:12.282 02:33:52 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:12.282 02:33:52 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:12.282 02:33:52 -- nvmf/common.sh@116 -- # sync 00:17:12.282 02:33:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:12.282 02:33:52 -- nvmf/common.sh@119 -- # set +e 00:17:12.282 02:33:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:12.282 02:33:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:12.282 rmmod nvme_tcp 00:17:12.282 rmmod nvme_fabrics 00:17:12.282 rmmod nvme_keyring 00:17:12.283 02:33:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:12.283 02:33:52 -- nvmf/common.sh@123 -- # set -e 00:17:12.283 02:33:52 -- nvmf/common.sh@124 -- # return 0 00:17:12.283 02:33:52 -- nvmf/common.sh@477 -- # '[' -n 77458 ']' 00:17:12.283 02:33:52 -- nvmf/common.sh@478 -- # killprocess 77458 00:17:12.283 02:33:52 -- common/autotest_common.sh@936 -- # '[' -z 77458 ']' 00:17:12.283 02:33:52 -- common/autotest_common.sh@940 -- # kill -0 77458 00:17:12.283 02:33:52 -- common/autotest_common.sh@941 -- # uname 00:17:12.283 02:33:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.283 02:33:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77458 00:17:12.283 02:33:52 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:12.283 02:33:52 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:12.283 killing process with pid 77458 00:17:12.283 02:33:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77458' 00:17:12.283 02:33:52 -- common/autotest_common.sh@955 -- # kill 77458 00:17:12.283 02:33:52 -- common/autotest_common.sh@960 -- # wait 77458 00:17:12.851 02:33:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:12.851 02:33:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:12.851 02:33:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:12.851 02:33:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.851 02:33:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:12.851 02:33:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.851 02:33:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.851 02:33:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.851 02:33:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:12.851 ************************************ 00:17:12.851 END TEST nvmf_bdevio_no_huge 00:17:12.851 ************************************ 00:17:12.851 00:17:12.851 real 0m3.647s 00:17:12.851 user 0m12.828s 00:17:12.851 sys 0m1.364s 00:17:12.851 02:33:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:12.851 02:33:53 -- common/autotest_common.sh@10 -- # set +x 00:17:12.851 02:33:53 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:12.851 02:33:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:12.851 02:33:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:12.851 02:33:53 -- common/autotest_common.sh@10 -- # set +x 00:17:12.851 ************************************ 00:17:12.851 START TEST nvmf_tls 00:17:12.851 ************************************ 00:17:12.851 02:33:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:12.851 * Looking for test storage... 
00:17:12.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:12.851 02:33:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:12.851 02:33:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:12.851 02:33:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:13.111 02:33:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:13.111 02:33:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:13.111 02:33:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:13.111 02:33:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:13.111 02:33:53 -- scripts/common.sh@335 -- # IFS=.-: 00:17:13.111 02:33:53 -- scripts/common.sh@335 -- # read -ra ver1 00:17:13.111 02:33:53 -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.111 02:33:53 -- scripts/common.sh@336 -- # read -ra ver2 00:17:13.111 02:33:53 -- scripts/common.sh@337 -- # local 'op=<' 00:17:13.111 02:33:53 -- scripts/common.sh@339 -- # ver1_l=2 00:17:13.111 02:33:53 -- scripts/common.sh@340 -- # ver2_l=1 00:17:13.111 02:33:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:13.111 02:33:53 -- scripts/common.sh@343 -- # case "$op" in 00:17:13.111 02:33:53 -- scripts/common.sh@344 -- # : 1 00:17:13.111 02:33:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:13.111 02:33:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:13.111 02:33:53 -- scripts/common.sh@364 -- # decimal 1 00:17:13.111 02:33:53 -- scripts/common.sh@352 -- # local d=1 00:17:13.111 02:33:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.111 02:33:53 -- scripts/common.sh@354 -- # echo 1 00:17:13.111 02:33:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:13.111 02:33:53 -- scripts/common.sh@365 -- # decimal 2 00:17:13.111 02:33:53 -- scripts/common.sh@352 -- # local d=2 00:17:13.111 02:33:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.111 02:33:53 -- scripts/common.sh@354 -- # echo 2 00:17:13.111 02:33:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:13.111 02:33:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:13.111 02:33:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:13.111 02:33:53 -- scripts/common.sh@367 -- # return 0 00:17:13.111 02:33:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.111 02:33:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:13.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.111 --rc genhtml_branch_coverage=1 00:17:13.111 --rc genhtml_function_coverage=1 00:17:13.111 --rc genhtml_legend=1 00:17:13.111 --rc geninfo_all_blocks=1 00:17:13.111 --rc geninfo_unexecuted_blocks=1 00:17:13.111 00:17:13.111 ' 00:17:13.111 02:33:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:13.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.111 --rc genhtml_branch_coverage=1 00:17:13.111 --rc genhtml_function_coverage=1 00:17:13.111 --rc genhtml_legend=1 00:17:13.111 --rc geninfo_all_blocks=1 00:17:13.111 --rc geninfo_unexecuted_blocks=1 00:17:13.111 00:17:13.111 ' 00:17:13.111 02:33:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:13.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.111 --rc genhtml_branch_coverage=1 00:17:13.111 --rc genhtml_function_coverage=1 00:17:13.111 --rc genhtml_legend=1 00:17:13.111 --rc geninfo_all_blocks=1 00:17:13.111 --rc geninfo_unexecuted_blocks=1 00:17:13.111 00:17:13.111 ' 00:17:13.111 
02:33:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:13.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.111 --rc genhtml_branch_coverage=1 00:17:13.111 --rc genhtml_function_coverage=1 00:17:13.111 --rc genhtml_legend=1 00:17:13.111 --rc geninfo_all_blocks=1 00:17:13.111 --rc geninfo_unexecuted_blocks=1 00:17:13.111 00:17:13.111 ' 00:17:13.111 02:33:53 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.111 02:33:53 -- nvmf/common.sh@7 -- # uname -s 00:17:13.111 02:33:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.111 02:33:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.111 02:33:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.111 02:33:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.111 02:33:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.111 02:33:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.111 02:33:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.111 02:33:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.111 02:33:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.111 02:33:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.111 02:33:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:17:13.111 02:33:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:17:13.111 02:33:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.111 02:33:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.111 02:33:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.111 02:33:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.111 02:33:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.111 02:33:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.111 02:33:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.111 02:33:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.111 02:33:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.112 02:33:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.112 02:33:53 -- paths/export.sh@5 -- # export PATH 00:17:13.112 02:33:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.112 02:33:53 -- nvmf/common.sh@46 -- # : 0 00:17:13.112 02:33:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:13.112 02:33:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:13.112 02:33:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:13.112 02:33:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.112 02:33:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.112 02:33:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:13.112 02:33:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:13.112 02:33:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:13.112 02:33:53 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.112 02:33:53 -- target/tls.sh@71 -- # nvmftestinit 00:17:13.112 02:33:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:13.112 02:33:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.112 02:33:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:13.112 02:33:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:13.112 02:33:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:13.112 02:33:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.112 02:33:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.112 02:33:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.112 02:33:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:13.112 02:33:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:13.112 02:33:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:13.112 02:33:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:13.112 02:33:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:13.112 02:33:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:13.112 02:33:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.112 02:33:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.112 02:33:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:13.112 02:33:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:13.112 02:33:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.112 02:33:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.112 02:33:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.112 
02:33:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.112 02:33:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.112 02:33:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.112 02:33:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.112 02:33:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.112 02:33:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:13.112 02:33:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:13.112 Cannot find device "nvmf_tgt_br" 00:17:13.112 02:33:53 -- nvmf/common.sh@154 -- # true 00:17:13.112 02:33:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.112 Cannot find device "nvmf_tgt_br2" 00:17:13.112 02:33:53 -- nvmf/common.sh@155 -- # true 00:17:13.112 02:33:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:13.112 02:33:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:13.112 Cannot find device "nvmf_tgt_br" 00:17:13.112 02:33:53 -- nvmf/common.sh@157 -- # true 00:17:13.112 02:33:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:13.112 Cannot find device "nvmf_tgt_br2" 00:17:13.112 02:33:53 -- nvmf/common.sh@158 -- # true 00:17:13.112 02:33:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:13.112 02:33:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:13.112 02:33:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.112 02:33:53 -- nvmf/common.sh@161 -- # true 00:17:13.112 02:33:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.112 02:33:53 -- nvmf/common.sh@162 -- # true 00:17:13.112 02:33:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.112 02:33:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.112 02:33:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.112 02:33:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.372 02:33:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.372 02:33:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.372 02:33:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.372 02:33:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:13.372 02:33:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:13.372 02:33:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:13.372 02:33:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:13.372 02:33:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:13.372 02:33:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:13.372 02:33:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.372 02:33:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.372 02:33:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.372 02:33:53 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:13.372 02:33:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:13.372 02:33:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.372 02:33:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.372 02:33:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.372 02:33:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.372 02:33:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.372 02:33:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:13.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:13.372 00:17:13.372 --- 10.0.0.2 ping statistics --- 00:17:13.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.372 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:13.372 02:33:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:13.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:17:13.372 00:17:13.372 --- 10.0.0.3 ping statistics --- 00:17:13.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.372 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:13.372 02:33:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:17:13.372 00:17:13.372 --- 10.0.0.1 ping statistics --- 00:17:13.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.372 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:13.372 02:33:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.372 02:33:53 -- nvmf/common.sh@421 -- # return 0 00:17:13.372 02:33:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:13.372 02:33:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.372 02:33:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:13.372 02:33:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:13.372 02:33:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.372 02:33:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:13.372 02:33:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:13.372 02:33:53 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:13.372 02:33:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:13.372 02:33:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:13.372 02:33:53 -- common/autotest_common.sh@10 -- # set +x 00:17:13.372 02:33:53 -- nvmf/common.sh@469 -- # nvmfpid=77704 00:17:13.372 02:33:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:13.373 02:33:53 -- nvmf/common.sh@470 -- # waitforlisten 77704 00:17:13.373 02:33:53 -- common/autotest_common.sh@829 -- # '[' -z 77704 ']' 00:17:13.373 02:33:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.373 02:33:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
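The tls.sh run that starts here uses the --wait-for-rpc pattern: nvmf_tgt is launched with its framework paused so that the default socket implementation can be switched to ssl, and the TLS version pinned, before any listener exists; the entries that follow show exactly this. Pulled out of the trace, the flow is roughly:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  rpc.py sock_set_default_impl -i ssl          # make ssl the default socket implementation
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init                  # only now does the rest of the target initialize

The sock_impl_get_options/set_options churn in between (TLS versions 13 and 7, ktls toggled on and off) is the test asserting that each setter actually takes effect.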
00:17:13.373 02:33:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.373 02:33:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.373 02:33:53 -- common/autotest_common.sh@10 -- # set +x 00:17:13.632 [2024-11-21 02:33:54.034211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:13.632 [2024-11-21 02:33:54.034308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.632 [2024-11-21 02:33:54.171505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.632 [2024-11-21 02:33:54.271091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:13.632 [2024-11-21 02:33:54.271260] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.632 [2024-11-21 02:33:54.271277] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.632 [2024-11-21 02:33:54.271287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.632 [2024-11-21 02:33:54.271319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.567 02:33:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.567 02:33:55 -- common/autotest_common.sh@862 -- # return 0 00:17:14.567 02:33:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:14.567 02:33:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.567 02:33:55 -- common/autotest_common.sh@10 -- # set +x 00:17:14.567 02:33:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.567 02:33:55 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:17:14.567 02:33:55 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:14.825 true 00:17:14.825 02:33:55 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.825 02:33:55 -- target/tls.sh@82 -- # jq -r .tls_version 00:17:15.083 02:33:55 -- target/tls.sh@82 -- # version=0 00:17:15.083 02:33:55 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:17:15.083 02:33:55 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:15.340 02:33:55 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.340 02:33:55 -- target/tls.sh@90 -- # jq -r .tls_version 00:17:15.597 02:33:56 -- target/tls.sh@90 -- # version=13 00:17:15.597 02:33:56 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:17:15.597 02:33:56 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:15.854 02:33:56 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.854 02:33:56 -- target/tls.sh@98 -- # jq -r .tls_version 00:17:16.112 02:33:56 -- target/tls.sh@98 -- # version=7 00:17:16.112 02:33:56 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:17:16.112 02:33:56 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:16.112 02:33:56 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:16.370 02:33:56 -- 
target/tls.sh@105 -- # ktls=false 00:17:16.370 02:33:56 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:17:16.370 02:33:56 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:16.628 02:33:57 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:16.628 02:33:57 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:16.886 02:33:57 -- target/tls.sh@113 -- # ktls=true 00:17:16.886 02:33:57 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:17:16.886 02:33:57 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:17.144 02:33:57 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:17.144 02:33:57 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:17:17.402 02:33:57 -- target/tls.sh@121 -- # ktls=false 00:17:17.402 02:33:57 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:17:17.402 02:33:57 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:17:17.402 02:33:57 -- target/tls.sh@49 -- # local key hash crc 00:17:17.402 02:33:57 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:17:17.402 02:33:57 -- target/tls.sh@51 -- # hash=01 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # tail -c8 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # gzip -1 -c 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # head -c 4 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # crc='p$H�' 00:17:17.402 02:33:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:17.402 02:33:57 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:17:17.402 02:33:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:17.402 02:33:57 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:17.402 02:33:57 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:17:17.402 02:33:57 -- target/tls.sh@49 -- # local key hash crc 00:17:17.402 02:33:57 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:17:17.402 02:33:57 -- target/tls.sh@51 -- # hash=01 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # tail -c8 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # gzip -1 -c 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # head -c 4 00:17:17.402 02:33:57 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:17:17.402 02:33:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:17.402 02:33:57 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:17:17.402 02:33:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:17.402 02:33:57 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:17.402 02:33:57 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.402 02:33:57 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:17.402 02:33:57 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:17.402 02:33:57 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
00:17:17.402 02:33:57 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.402 02:33:57 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:17.402 02:33:57 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:17.660 02:33:58 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:17.918 02:33:58 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.918 02:33:58 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:17.918 02:33:58 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:18.177 [2024-11-21 02:33:58.763900] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.177 02:33:58 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:18.436 02:33:59 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:18.703 [2024-11-21 02:33:59.251941] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:18.703 [2024-11-21 02:33:59.252231] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.703 02:33:59 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:18.967 malloc0 00:17:18.967 02:33:59 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:19.225 02:33:59 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:19.483 02:34:00 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:31.684 Initializing NVMe Controllers 00:17:31.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:31.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:31.684 Initialization complete. Launching workers. 
00:17:31.684 ======================================================== 00:17:31.684 Latency(us) 00:17:31.684 Device Information : IOPS MiB/s Average min max 00:17:31.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11679.80 45.62 5480.48 1676.06 11749.17 00:17:31.684 ======================================================== 00:17:31.684 Total : 11679.80 45.62 5480.48 1676.06 11749.17 00:17:31.684 00:17:31.684 02:34:10 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:31.684 02:34:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.684 02:34:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:31.684 02:34:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.684 02:34:10 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:31.684 02:34:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.684 02:34:10 -- target/tls.sh@28 -- # bdevperf_pid=78073 00:17:31.684 02:34:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.684 02:34:10 -- target/tls.sh@31 -- # waitforlisten 78073 /var/tmp/bdevperf.sock 00:17:31.684 02:34:10 -- common/autotest_common.sh@829 -- # '[' -z 78073 ']' 00:17:31.684 02:34:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.684 02:34:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.684 02:34:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.684 02:34:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.684 02:34:10 -- common/autotest_common.sh@10 -- # set +x 00:17:31.684 02:34:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.684 [2024-11-21 02:34:10.276924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:31.684 [2024-11-21 02:34:10.277032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78073 ] 00:17:31.684 [2024-11-21 02:34:10.418391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.684 [2024-11-21 02:34:10.515274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.684 02:34:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.684 02:34:11 -- common/autotest_common.sh@862 -- # return 0 00:17:31.684 02:34:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:31.684 [2024-11-21 02:34:11.465704] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.684 TLSTESTn1 00:17:31.684 02:34:11 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:31.684 Running I/O for 10 seconds... 
00:17:41.652 00:17:41.652 Latency(us) 00:17:41.652 [2024-11-21T02:34:22.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.652 [2024-11-21T02:34:22.299Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:41.652 Verification LBA range: start 0x0 length 0x2000 00:17:41.652 TLSTESTn1 : 10.01 5937.62 23.19 0.00 0.00 21529.66 2055.45 23950.43 00:17:41.652 [2024-11-21T02:34:22.299Z] =================================================================================================================== 00:17:41.652 [2024-11-21T02:34:22.299Z] Total : 5937.62 23.19 0.00 0.00 21529.66 2055.45 23950.43 00:17:41.652 0 00:17:41.652 02:34:21 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.652 02:34:21 -- target/tls.sh@45 -- # killprocess 78073 00:17:41.652 02:34:21 -- common/autotest_common.sh@936 -- # '[' -z 78073 ']' 00:17:41.652 02:34:21 -- common/autotest_common.sh@940 -- # kill -0 78073 00:17:41.652 02:34:21 -- common/autotest_common.sh@941 -- # uname 00:17:41.652 02:34:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.652 02:34:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78073 00:17:41.652 killing process with pid 78073 00:17:41.652 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.652 00:17:41.652 Latency(us) 00:17:41.652 [2024-11-21T02:34:22.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.652 [2024-11-21T02:34:22.299Z] =================================================================================================================== 00:17:41.652 [2024-11-21T02:34:22.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.652 02:34:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:41.652 02:34:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:41.652 02:34:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78073' 00:17:41.652 02:34:21 -- common/autotest_common.sh@955 -- # kill 78073 00:17:41.652 02:34:21 -- common/autotest_common.sh@960 -- # wait 78073 00:17:41.652 02:34:22 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:41.652 02:34:22 -- common/autotest_common.sh@650 -- # local es=0 00:17:41.652 02:34:22 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:41.652 02:34:22 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:41.652 02:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.652 02:34:22 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:41.652 02:34:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.652 02:34:22 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:41.652 02:34:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:41.652 02:34:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:41.652 02:34:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:41.652 02:34:22 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:41.653 02:34:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.653 
02:34:22 -- target/tls.sh@28 -- # bdevperf_pid=78219 00:17:41.653 02:34:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.653 02:34:22 -- target/tls.sh@31 -- # waitforlisten 78219 /var/tmp/bdevperf.sock 00:17:41.653 02:34:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.653 02:34:22 -- common/autotest_common.sh@829 -- # '[' -z 78219 ']' 00:17:41.653 02:34:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.653 02:34:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.653 02:34:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.653 02:34:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.653 02:34:22 -- common/autotest_common.sh@10 -- # set +x 00:17:41.653 [2024-11-21 02:34:22.054942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:41.653 [2024-11-21 02:34:22.055387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78219 ] 00:17:41.653 [2024-11-21 02:34:22.184705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.653 [2024-11-21 02:34:22.270573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.587 02:34:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.587 02:34:23 -- common/autotest_common.sh@862 -- # return 0 00:17:42.587 02:34:23 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:42.587 [2024-11-21 02:34:23.197166] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.587 [2024-11-21 02:34:23.206052] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:42.587 [2024-11-21 02:34:23.206494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeca3d0 (107): Transport endpoint is not connected 00:17:42.587 [2024-11-21 02:34:23.207479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeca3d0 (9): Bad file descriptor 00:17:42.587 [2024-11-21 02:34:23.208474] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:42.587 [2024-11-21 02:34:23.208499] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:42.587 [2024-11-21 02:34:23.208509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
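This second bdevperf instance (pid 78219) is the negative case: the target side, set up earlier by setup_nvmf_tgt, registered only key1.txt for host1, so attaching with key2.txt below is expected to fail. Reduced to the essential commands from this log (rpc.py as before, key paths shortened from /home/vagrant/spdk_repo/spdk/test/nvmf/target/, malloc0 namespace creation omitted):

  # target side: TLS-enabled listener plus a per-host PSK
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt
  # initiator side: bdevperf runs with -z and is driven over its own RPC socket
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key1.txt

With key1.txt the attach succeeds and bdevperf.py perform_tests drives the 10-second verify run seen above; with key2.txt the same call fails with the -32602 bdev_nvme_attach_controller error shown next.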
00:17:42.587 2024/11/21 02:34:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:42.587 request: 00:17:42.587 { 00:17:42.587 "method": "bdev_nvme_attach_controller", 00:17:42.587 "params": { 00:17:42.587 "name": "TLSTEST", 00:17:42.587 "trtype": "tcp", 00:17:42.587 "traddr": "10.0.0.2", 00:17:42.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.587 "adrfam": "ipv4", 00:17:42.587 "trsvcid": "4420", 00:17:42.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.587 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:42.587 } 00:17:42.587 } 00:17:42.587 Got JSON-RPC error response 00:17:42.587 GoRPCClient: error on JSON-RPC call 00:17:42.587 02:34:23 -- target/tls.sh@36 -- # killprocess 78219 00:17:42.587 02:34:23 -- common/autotest_common.sh@936 -- # '[' -z 78219 ']' 00:17:42.587 02:34:23 -- common/autotest_common.sh@940 -- # kill -0 78219 00:17:42.587 02:34:23 -- common/autotest_common.sh@941 -- # uname 00:17:42.587 02:34:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.587 02:34:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78219 00:17:42.846 killing process with pid 78219 00:17:42.846 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.846 00:17:42.846 Latency(us) 00:17:42.846 [2024-11-21T02:34:23.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.846 [2024-11-21T02:34:23.493Z] =================================================================================================================== 00:17:42.846 [2024-11-21T02:34:23.493Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.846 02:34:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:42.846 02:34:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:42.846 02:34:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78219' 00:17:42.846 02:34:23 -- common/autotest_common.sh@955 -- # kill 78219 00:17:42.846 02:34:23 -- common/autotest_common.sh@960 -- # wait 78219 00:17:43.104 02:34:23 -- target/tls.sh@37 -- # return 1 00:17:43.104 02:34:23 -- common/autotest_common.sh@653 -- # es=1 00:17:43.104 02:34:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.104 02:34:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.104 02:34:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.104 02:34:23 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:43.104 02:34:23 -- common/autotest_common.sh@650 -- # local es=0 00:17:43.104 02:34:23 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:43.104 02:34:23 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:43.104 02:34:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.104 02:34:23 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:43.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:43.104 02:34:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.104 02:34:23 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:43.104 02:34:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:43.104 02:34:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:43.104 02:34:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:43.104 02:34:23 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:43.104 02:34:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.104 02:34:23 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:43.104 02:34:23 -- target/tls.sh@28 -- # bdevperf_pid=78265 00:17:43.104 02:34:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:43.104 02:34:23 -- target/tls.sh@31 -- # waitforlisten 78265 /var/tmp/bdevperf.sock 00:17:43.104 02:34:23 -- common/autotest_common.sh@829 -- # '[' -z 78265 ']' 00:17:43.104 02:34:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:43.104 02:34:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.104 02:34:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:43.104 02:34:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.104 02:34:23 -- common/autotest_common.sh@10 -- # set +x 00:17:43.104 [2024-11-21 02:34:23.592711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:43.104 [2024-11-21 02:34:23.592947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78265 ] 00:17:43.104 [2024-11-21 02:34:23.723695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.362 [2024-11-21 02:34:23.813661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.928 02:34:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.928 02:34:24 -- common/autotest_common.sh@862 -- # return 0 00:17:43.928 02:34:24 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:44.185 [2024-11-21 02:34:24.800535] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.185 [2024-11-21 02:34:24.806155] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:44.185 [2024-11-21 02:34:24.806236] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:44.186 [2024-11-21 02:34:24.806306] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:44.186 [2024-11-21 02:34:24.807061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf013d0 (107): Transport endpoint is not connected 00:17:44.186 [2024-11-21 02:34:24.808047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf013d0 (9): Bad file descriptor 00:17:44.186 [2024-11-21 02:34:24.809042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:44.186 [2024-11-21 02:34:24.809069] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:44.186 [2024-11-21 02:34:24.809082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
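Note: the posix error above ("Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1") shows the lookup key the target builds from the host and subsystem NQNs; presumably no PSK was registered for host2 on this subsystem, which is why this attach is also expected to fail. For reference, the identity string printed above is simply:

    # values taken verbatim from the error message above
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"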
00:17:44.186 2024/11/21 02:34:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:44.186 request: 00:17:44.186 { 00:17:44.186 "method": "bdev_nvme_attach_controller", 00:17:44.186 "params": { 00:17:44.186 "name": "TLSTEST", 00:17:44.186 "trtype": "tcp", 00:17:44.186 "traddr": "10.0.0.2", 00:17:44.186 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:44.186 "adrfam": "ipv4", 00:17:44.186 "trsvcid": "4420", 00:17:44.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.186 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:44.186 } 00:17:44.186 } 00:17:44.186 Got JSON-RPC error response 00:17:44.186 GoRPCClient: error on JSON-RPC call 00:17:44.444 02:34:24 -- target/tls.sh@36 -- # killprocess 78265 00:17:44.444 02:34:24 -- common/autotest_common.sh@936 -- # '[' -z 78265 ']' 00:17:44.444 02:34:24 -- common/autotest_common.sh@940 -- # kill -0 78265 00:17:44.444 02:34:24 -- common/autotest_common.sh@941 -- # uname 00:17:44.444 02:34:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.444 02:34:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78265 00:17:44.444 killing process with pid 78265 00:17:44.444 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.444 00:17:44.444 Latency(us) 00:17:44.444 [2024-11-21T02:34:25.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.444 [2024-11-21T02:34:25.091Z] =================================================================================================================== 00:17:44.444 [2024-11-21T02:34:25.091Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:44.444 02:34:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:44.444 02:34:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:44.444 02:34:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78265' 00:17:44.444 02:34:24 -- common/autotest_common.sh@955 -- # kill 78265 00:17:44.444 02:34:24 -- common/autotest_common.sh@960 -- # wait 78265 00:17:44.702 02:34:25 -- target/tls.sh@37 -- # return 1 00:17:44.702 02:34:25 -- common/autotest_common.sh@653 -- # es=1 00:17:44.702 02:34:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.702 02:34:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.702 02:34:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.702 02:34:25 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:44.702 02:34:25 -- common/autotest_common.sh@650 -- # local es=0 00:17:44.702 02:34:25 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:44.702 02:34:25 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:44.702 02:34:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.702 02:34:25 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:44.702 02:34:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.702 02:34:25 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:44.702 02:34:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.702 02:34:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:44.702 02:34:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.703 02:34:25 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:44.703 02:34:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.703 02:34:25 -- target/tls.sh@28 -- # bdevperf_pid=78316 00:17:44.703 02:34:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.703 02:34:25 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.703 02:34:25 -- target/tls.sh@31 -- # waitforlisten 78316 /var/tmp/bdevperf.sock 00:17:44.703 02:34:25 -- common/autotest_common.sh@829 -- # '[' -z 78316 ']' 00:17:44.703 02:34:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.703 02:34:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.703 02:34:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.703 02:34:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.703 02:34:25 -- common/autotest_common.sh@10 -- # set +x 00:17:44.703 [2024-11-21 02:34:25.205730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:44.703 [2024-11-21 02:34:25.206023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78316 ] 00:17:44.703 [2024-11-21 02:34:25.338448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.960 [2024-11-21 02:34:25.411967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.893 02:34:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.893 02:34:26 -- common/autotest_common.sh@862 -- # return 0 00:17:45.893 02:34:26 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:45.893 [2024-11-21 02:34:26.413811] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.893 [2024-11-21 02:34:26.418402] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:45.893 [2024-11-21 02:34:26.418437] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:45.893 [2024-11-21 02:34:26.418512] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:45.893 [2024-11-21 02:34:26.419147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x114b3d0 (107): Transport endpoint is not connected 00:17:45.893 [2024-11-21 02:34:26.420111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114b3d0 (9): Bad file descriptor 00:17:45.893 [2024-11-21 02:34:26.421107] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:45.893 [2024-11-21 02:34:26.421133] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:45.893 [2024-11-21 02:34:26.421144] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:45.893 2024/11/21 02:34:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:45.893 request: 00:17:45.893 { 00:17:45.893 "method": "bdev_nvme_attach_controller", 00:17:45.893 "params": { 00:17:45.894 "name": "TLSTEST", 00:17:45.894 "trtype": "tcp", 00:17:45.894 "traddr": "10.0.0.2", 00:17:45.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:45.894 "adrfam": "ipv4", 00:17:45.894 "trsvcid": "4420", 00:17:45.894 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:45.894 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:45.894 } 00:17:45.894 } 00:17:45.894 Got JSON-RPC error response 00:17:45.894 GoRPCClient: error on JSON-RPC call 00:17:45.894 02:34:26 -- target/tls.sh@36 -- # killprocess 78316 00:17:45.894 02:34:26 -- common/autotest_common.sh@936 -- # '[' -z 78316 ']' 00:17:45.894 02:34:26 -- common/autotest_common.sh@940 -- # kill -0 78316 00:17:45.894 02:34:26 -- common/autotest_common.sh@941 -- # uname 00:17:45.894 02:34:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.894 02:34:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78316 00:17:45.894 killing process with pid 78316 00:17:45.894 Received shutdown signal, test time was about 10.000000 seconds 00:17:45.894 00:17:45.894 Latency(us) 00:17:45.894 [2024-11-21T02:34:26.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.894 [2024-11-21T02:34:26.541Z] =================================================================================================================== 00:17:45.894 [2024-11-21T02:34:26.541Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:45.894 02:34:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:45.894 02:34:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:45.894 02:34:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78316' 00:17:45.894 02:34:26 -- common/autotest_common.sh@955 -- # kill 78316 00:17:45.894 02:34:26 -- common/autotest_common.sh@960 -- # wait 78316 00:17:46.151 02:34:26 -- target/tls.sh@37 -- # return 1 00:17:46.151 02:34:26 -- common/autotest_common.sh@653 -- # es=1 00:17:46.151 02:34:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.151 02:34:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.151 02:34:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.151 02:34:26 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:46.151 02:34:26 -- common/autotest_common.sh@650 -- # local es=0 00:17:46.151 02:34:26 -- 
common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:46.151 02:34:26 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:46.151 02:34:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.151 02:34:26 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:46.151 02:34:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.151 02:34:26 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:46.151 02:34:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:46.151 02:34:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:46.151 02:34:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:46.151 02:34:26 -- target/tls.sh@23 -- # psk= 00:17:46.151 02:34:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.151 02:34:26 -- target/tls.sh@28 -- # bdevperf_pid=78356 00:17:46.151 02:34:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.151 02:34:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.151 02:34:26 -- target/tls.sh@31 -- # waitforlisten 78356 /var/tmp/bdevperf.sock 00:17:46.151 02:34:26 -- common/autotest_common.sh@829 -- # '[' -z 78356 ']' 00:17:46.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.151 02:34:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.151 02:34:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.151 02:34:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.151 02:34:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.151 02:34:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.409 [2024-11-21 02:34:26.815377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:46.409 [2024-11-21 02:34:26.815486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78356 ] 00:17:46.409 [2024-11-21 02:34:26.953872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.409 [2024-11-21 02:34:27.030039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.359 02:34:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.359 02:34:27 -- common/autotest_common.sh@862 -- # return 0 00:17:47.360 02:34:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:47.360 [2024-11-21 02:34:27.886462] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:47.360 [2024-11-21 02:34:27.888200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66ddc0 (9): Bad file descriptor 00:17:47.360 [2024-11-21 02:34:27.889190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:47.360 [2024-11-21 02:34:27.889214] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:47.360 [2024-11-21 02:34:27.889225] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:47.360 2024/11/21 02:34:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:47.360 request: 00:17:47.360 { 00:17:47.360 "method": "bdev_nvme_attach_controller", 00:17:47.360 "params": { 00:17:47.360 "name": "TLSTEST", 00:17:47.360 "trtype": "tcp", 00:17:47.360 "traddr": "10.0.0.2", 00:17:47.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.360 "adrfam": "ipv4", 00:17:47.360 "trsvcid": "4420", 00:17:47.360 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:47.360 } 00:17:47.360 } 00:17:47.360 Got JSON-RPC error response 00:17:47.360 GoRPCClient: error on JSON-RPC call 00:17:47.360 02:34:27 -- target/tls.sh@36 -- # killprocess 78356 00:17:47.360 02:34:27 -- common/autotest_common.sh@936 -- # '[' -z 78356 ']' 00:17:47.360 02:34:27 -- common/autotest_common.sh@940 -- # kill -0 78356 00:17:47.360 02:34:27 -- common/autotest_common.sh@941 -- # uname 00:17:47.360 02:34:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:47.360 02:34:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78356 00:17:47.360 killing process with pid 78356 00:17:47.360 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.360 00:17:47.360 Latency(us) 00:17:47.360 [2024-11-21T02:34:28.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.360 [2024-11-21T02:34:28.007Z] =================================================================================================================== 00:17:47.360 [2024-11-21T02:34:28.007Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.360 02:34:27 -- common/autotest_common.sh@942 -- # 
process_name=reactor_2 00:17:47.360 02:34:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:47.360 02:34:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78356' 00:17:47.360 02:34:27 -- common/autotest_common.sh@955 -- # kill 78356 00:17:47.360 02:34:27 -- common/autotest_common.sh@960 -- # wait 78356 00:17:47.631 02:34:28 -- target/tls.sh@37 -- # return 1 00:17:47.631 02:34:28 -- common/autotest_common.sh@653 -- # es=1 00:17:47.631 02:34:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.631 02:34:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.631 02:34:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.631 02:34:28 -- target/tls.sh@167 -- # killprocess 77704 00:17:47.631 02:34:28 -- common/autotest_common.sh@936 -- # '[' -z 77704 ']' 00:17:47.631 02:34:28 -- common/autotest_common.sh@940 -- # kill -0 77704 00:17:47.631 02:34:28 -- common/autotest_common.sh@941 -- # uname 00:17:47.631 02:34:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:47.631 02:34:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77704 00:17:47.631 killing process with pid 77704 00:17:47.631 02:34:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:47.631 02:34:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:47.631 02:34:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77704' 00:17:47.631 02:34:28 -- common/autotest_common.sh@955 -- # kill 77704 00:17:47.631 02:34:28 -- common/autotest_common.sh@960 -- # wait 77704 00:17:47.905 02:34:28 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:47.905 02:34:28 -- target/tls.sh@49 -- # local key hash crc 00:17:47.905 02:34:28 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:47.905 02:34:28 -- target/tls.sh@51 -- # hash=02 00:17:47.905 02:34:28 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:47.905 02:34:28 -- target/tls.sh@52 -- # gzip -1 -c 00:17:47.905 02:34:28 -- target/tls.sh@52 -- # tail -c8 00:17:47.905 02:34:28 -- target/tls.sh@52 -- # head -c 4 00:17:47.905 02:34:28 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:47.905 02:34:28 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:47.905 02:34:28 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:47.906 02:34:28 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:47.906 02:34:28 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:47.906 02:34:28 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:47.906 02:34:28 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:47.906 02:34:28 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:47.906 02:34:28 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:47.906 02:34:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:47.906 02:34:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.906 02:34:28 -- common/autotest_common.sh@10 -- # set +x 00:17:47.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
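Note: the format_interchange_psk step above (target/tls.sh@168) wraps the configured key into the TLS PSK interchange form NVMeTLSkey-1:02:<base64>: by appending the key's CRC32 (taken from the gzip trailer, whose last 8 bytes are the little-endian CRC32 followed by the input size) and base64-encoding key plus CRC together. A condensed equivalent of the same pipeline, assuming GNU coreutils and gzip:

    key=00112233445566778899aabbccddeeff0011223344556677
    # CRC32 of the key, read from the gzip trailer and appended as 4 raw bytes
    b64=$({ echo -n "$key"; echo -n "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64)
    echo "NVMeTLSkey-1:02:${b64}:"   # matches the key_long value logged above

The result is written to key_long.txt and restricted to mode 0600 (tls.sh@171) before the target is started.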
00:17:47.906 02:34:28 -- nvmf/common.sh@469 -- # nvmfpid=78422 00:17:47.906 02:34:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:47.906 02:34:28 -- nvmf/common.sh@470 -- # waitforlisten 78422 00:17:47.906 02:34:28 -- common/autotest_common.sh@829 -- # '[' -z 78422 ']' 00:17:47.906 02:34:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.906 02:34:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.906 02:34:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.906 02:34:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.906 02:34:28 -- common/autotest_common.sh@10 -- # set +x 00:17:48.164 [2024-11-21 02:34:28.557493] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:48.164 [2024-11-21 02:34:28.557589] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.164 [2024-11-21 02:34:28.686881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.164 [2024-11-21 02:34:28.762083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:48.164 [2024-11-21 02:34:28.762240] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.164 [2024-11-21 02:34:28.762252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.164 [2024-11-21 02:34:28.762260] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:48.164 [2024-11-21 02:34:28.762293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.098 02:34:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.098 02:34:29 -- common/autotest_common.sh@862 -- # return 0 00:17:49.098 02:34:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:49.098 02:34:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.098 02:34:29 -- common/autotest_common.sh@10 -- # set +x 00:17:49.098 02:34:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.098 02:34:29 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:49.098 02:34:29 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:49.098 02:34:29 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:49.355 [2024-11-21 02:34:29.855256] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.355 02:34:29 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:49.613 02:34:30 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:49.871 [2024-11-21 02:34:30.379343] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:49.871 [2024-11-21 02:34:30.379624] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.871 02:34:30 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:50.129 malloc0 00:17:50.129 02:34:30 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:50.387 02:34:30 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:50.645 02:34:31 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:50.645 02:34:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:50.645 02:34:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:50.645 02:34:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:50.645 02:34:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:50.645 02:34:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.645 02:34:31 -- target/tls.sh@28 -- # bdevperf_pid=78520 00:17:50.645 02:34:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:50.645 02:34:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:50.645 02:34:31 -- target/tls.sh@31 -- # waitforlisten 78520 /var/tmp/bdevperf.sock 00:17:50.645 02:34:31 -- common/autotest_common.sh@829 -- # '[' -z 78520 ']' 00:17:50.645 02:34:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.645 02:34:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.645 02:34:31 -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.645 02:34:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.645 02:34:31 -- common/autotest_common.sh@10 -- # set +x 00:17:50.645 [2024-11-21 02:34:31.199766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:50.645 [2024-11-21 02:34:31.200367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78520 ] 00:17:50.903 [2024-11-21 02:34:31.339755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.903 [2024-11-21 02:34:31.456000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.837 02:34:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.837 02:34:32 -- common/autotest_common.sh@862 -- # return 0 00:17:51.837 02:34:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:51.837 [2024-11-21 02:34:32.333542] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.837 TLSTESTn1 00:17:51.837 02:34:32 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:52.095 Running I/O for 10 seconds... 00:18:02.061 00:18:02.061 Latency(us) 00:18:02.061 [2024-11-21T02:34:42.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.061 [2024-11-21T02:34:42.708Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:02.062 Verification LBA range: start 0x0 length 0x2000 00:18:02.062 TLSTESTn1 : 10.01 5889.93 23.01 0.00 0.00 21701.53 3813.00 21805.61 00:18:02.062 [2024-11-21T02:34:42.709Z] =================================================================================================================== 00:18:02.062 [2024-11-21T02:34:42.709Z] Total : 5889.93 23.01 0.00 0.00 21701.53 3813.00 21805.61 00:18:02.062 0 00:18:02.062 02:34:42 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:02.062 02:34:42 -- target/tls.sh@45 -- # killprocess 78520 00:18:02.062 02:34:42 -- common/autotest_common.sh@936 -- # '[' -z 78520 ']' 00:18:02.062 02:34:42 -- common/autotest_common.sh@940 -- # kill -0 78520 00:18:02.062 02:34:42 -- common/autotest_common.sh@941 -- # uname 00:18:02.062 02:34:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:02.062 02:34:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78520 00:18:02.062 killing process with pid 78520 00:18:02.062 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.062 00:18:02.062 Latency(us) 00:18:02.062 [2024-11-21T02:34:42.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.062 [2024-11-21T02:34:42.709Z] =================================================================================================================== 00:18:02.062 [2024-11-21T02:34:42.709Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.062 02:34:42 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:02.062 02:34:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:02.062 02:34:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78520' 00:18:02.062 02:34:42 -- common/autotest_common.sh@955 -- # kill 78520 00:18:02.062 02:34:42 -- common/autotest_common.sh@960 -- # wait 78520 00:18:02.320 02:34:42 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.320 02:34:42 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.320 02:34:42 -- common/autotest_common.sh@650 -- # local es=0 00:18:02.320 02:34:42 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.320 02:34:42 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:02.320 02:34:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.320 02:34:42 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:02.320 02:34:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.320 02:34:42 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.320 02:34:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.320 02:34:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.320 02:34:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.320 02:34:42 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:18:02.320 02:34:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.320 02:34:42 -- target/tls.sh@28 -- # bdevperf_pid=78673 00:18:02.320 02:34:42 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.320 02:34:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.320 02:34:42 -- target/tls.sh@31 -- # waitforlisten 78673 /var/tmp/bdevperf.sock 00:18:02.320 02:34:42 -- common/autotest_common.sh@829 -- # '[' -z 78673 ']' 00:18:02.320 02:34:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.320 02:34:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.320 02:34:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.320 02:34:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.320 02:34:42 -- common/autotest_common.sh@10 -- # set +x 00:18:02.320 [2024-11-21 02:34:42.935674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:02.320 [2024-11-21 02:34:42.935988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78673 ] 00:18:02.578 [2024-11-21 02:34:43.071811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.578 [2024-11-21 02:34:43.154910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.515 02:34:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.515 02:34:43 -- common/autotest_common.sh@862 -- # return 0 00:18:03.515 02:34:43 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:03.515 [2024-11-21 02:34:44.088611] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.515 [2024-11-21 02:34:44.088663] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:03.515 2024/11/21 02:34:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:03.515 request: 00:18:03.515 { 00:18:03.515 "method": "bdev_nvme_attach_controller", 00:18:03.515 "params": { 00:18:03.515 "name": "TLSTEST", 00:18:03.515 "trtype": "tcp", 00:18:03.515 "traddr": "10.0.0.2", 00:18:03.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.515 "adrfam": "ipv4", 00:18:03.515 "trsvcid": "4420", 00:18:03.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.515 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:03.515 } 00:18:03.515 } 00:18:03.515 Got JSON-RPC error response 00:18:03.515 GoRPCClient: error on JSON-RPC call 00:18:03.515 02:34:44 -- target/tls.sh@36 -- # killprocess 78673 00:18:03.515 02:34:44 -- common/autotest_common.sh@936 -- # '[' -z 78673 ']' 00:18:03.515 02:34:44 -- common/autotest_common.sh@940 -- # kill -0 78673 00:18:03.515 02:34:44 -- common/autotest_common.sh@941 -- # uname 00:18:03.515 02:34:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.515 02:34:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78673 00:18:03.515 killing process with pid 78673 00:18:03.515 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.515 00:18:03.515 Latency(us) 00:18:03.515 [2024-11-21T02:34:44.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.515 [2024-11-21T02:34:44.162Z] =================================================================================================================== 00:18:03.515 [2024-11-21T02:34:44.162Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.515 02:34:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:03.515 02:34:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:03.515 02:34:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78673' 00:18:03.515 02:34:44 -- 
common/autotest_common.sh@955 -- # kill 78673 00:18:03.515 02:34:44 -- common/autotest_common.sh@960 -- # wait 78673 00:18:04.084 02:34:44 -- target/tls.sh@37 -- # return 1 00:18:04.084 02:34:44 -- common/autotest_common.sh@653 -- # es=1 00:18:04.084 02:34:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.084 02:34:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.084 02:34:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.084 02:34:44 -- target/tls.sh@183 -- # killprocess 78422 00:18:04.084 02:34:44 -- common/autotest_common.sh@936 -- # '[' -z 78422 ']' 00:18:04.084 02:34:44 -- common/autotest_common.sh@940 -- # kill -0 78422 00:18:04.084 02:34:44 -- common/autotest_common.sh@941 -- # uname 00:18:04.084 02:34:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.084 02:34:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78422 00:18:04.084 killing process with pid 78422 00:18:04.084 02:34:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:04.084 02:34:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:04.084 02:34:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78422' 00:18:04.084 02:34:44 -- common/autotest_common.sh@955 -- # kill 78422 00:18:04.084 02:34:44 -- common/autotest_common.sh@960 -- # wait 78422 00:18:04.084 02:34:44 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:04.084 02:34:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.084 02:34:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.084 02:34:44 -- common/autotest_common.sh@10 -- # set +x 00:18:04.084 02:34:44 -- nvmf/common.sh@469 -- # nvmfpid=78729 00:18:04.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.084 02:34:44 -- nvmf/common.sh@470 -- # waitforlisten 78729 00:18:04.084 02:34:44 -- common/autotest_common.sh@829 -- # '[' -z 78729 ']' 00:18:04.084 02:34:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.084 02:34:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.084 02:34:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.084 02:34:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.084 02:34:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.084 02:34:44 -- common/autotest_common.sh@10 -- # set +x 00:18:04.344 [2024-11-21 02:34:44.767800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:04.344 [2024-11-21 02:34:44.767898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.344 [2024-11-21 02:34:44.909666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.602 [2024-11-21 02:34:44.996824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.602 [2024-11-21 02:34:44.996978] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.602 [2024-11-21 02:34:44.996992] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
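Note: the Code=-22 "Could not retrieve PSK from file" failure above was forced by relaxing the key file to mode 0666 at target/tls.sh@179; the PSK loader rejects key files that are accessible beyond the owner ("Incorrect permissions for PSK file"), and tls.sh@190 restores 0600 before the key is used again. A quick sanity check of the expected mode, using the same path as this run:

    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    chmod 0600 "$key"
    [ "$(stat -c '%a' "$key")" = 600 ] || { echo "PSK file must be 0600" >&2; exit 1; }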
00:18:04.602 [2024-11-21 02:34:44.997001] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.602 [2024-11-21 02:34:44.997036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.169 02:34:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.169 02:34:45 -- common/autotest_common.sh@862 -- # return 0 00:18:05.169 02:34:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:05.169 02:34:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.169 02:34:45 -- common/autotest_common.sh@10 -- # set +x 00:18:05.169 02:34:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.169 02:34:45 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.169 02:34:45 -- common/autotest_common.sh@650 -- # local es=0 00:18:05.169 02:34:45 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.169 02:34:45 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:05.169 02:34:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.169 02:34:45 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:05.169 02:34:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:05.169 02:34:45 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.169 02:34:45 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:05.169 02:34:45 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.427 [2024-11-21 02:34:46.030274] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.427 02:34:46 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.685 02:34:46 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.943 [2024-11-21 02:34:46.490367] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.943 [2024-11-21 02:34:46.490616] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.943 02:34:46 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.201 malloc0 00:18:06.201 02:34:46 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.458 02:34:47 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:06.722 [2024-11-21 02:34:47.340210] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:06.722 [2024-11-21 02:34:47.340245] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:06.723 [2024-11-21 02:34:47.340264] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:06.723 2024/11/21 02:34:47 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:18:06.723 request: 00:18:06.723 { 00:18:06.723 "method": "nvmf_subsystem_add_host", 00:18:06.723 "params": { 00:18:06.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.723 "host": "nqn.2016-06.io.spdk:host1", 00:18:06.723 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:06.723 } 00:18:06.723 } 00:18:06.723 Got JSON-RPC error response 00:18:06.723 GoRPCClient: error on JSON-RPC call 00:18:06.723 02:34:47 -- common/autotest_common.sh@653 -- # es=1 00:18:06.723 02:34:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:06.723 02:34:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:06.723 02:34:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:06.723 02:34:47 -- target/tls.sh@189 -- # killprocess 78729 00:18:06.723 02:34:47 -- common/autotest_common.sh@936 -- # '[' -z 78729 ']' 00:18:06.723 02:34:47 -- common/autotest_common.sh@940 -- # kill -0 78729 00:18:06.723 02:34:47 -- common/autotest_common.sh@941 -- # uname 00:18:06.982 02:34:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:06.982 02:34:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78729 00:18:06.982 02:34:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:06.982 killing process with pid 78729 00:18:06.982 02:34:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:06.982 02:34:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78729' 00:18:06.982 02:34:47 -- common/autotest_common.sh@955 -- # kill 78729 00:18:06.982 02:34:47 -- common/autotest_common.sh@960 -- # wait 78729 00:18:07.241 02:34:47 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:07.241 02:34:47 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:18:07.241 02:34:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:07.241 02:34:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.241 02:34:47 -- common/autotest_common.sh@10 -- # set +x 00:18:07.241 02:34:47 -- nvmf/common.sh@469 -- # nvmfpid=78840 00:18:07.241 02:34:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.241 02:34:47 -- nvmf/common.sh@470 -- # waitforlisten 78840 00:18:07.241 02:34:47 -- common/autotest_common.sh@829 -- # '[' -z 78840 ']' 00:18:07.241 02:34:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.241 02:34:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.241 02:34:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.241 02:34:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.241 02:34:47 -- common/autotest_common.sh@10 -- # set +x 00:18:07.241 [2024-11-21 02:34:47.752196] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:07.241 [2024-11-21 02:34:47.752959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.241 [2024-11-21 02:34:47.883162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.500 [2024-11-21 02:34:47.966110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:07.500 [2024-11-21 02:34:47.966235] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.500 [2024-11-21 02:34:47.966247] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.500 [2024-11-21 02:34:47.966257] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.500 [2024-11-21 02:34:47.966285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.068 02:34:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.068 02:34:48 -- common/autotest_common.sh@862 -- # return 0 00:18:08.068 02:34:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:08.068 02:34:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:08.068 02:34:48 -- common/autotest_common.sh@10 -- # set +x 00:18:08.327 02:34:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.327 02:34:48 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:08.327 02:34:48 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:08.327 02:34:48 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:08.327 [2024-11-21 02:34:48.946982] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.327 02:34:48 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:08.585 02:34:49 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:08.843 [2024-11-21 02:34:49.423079] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:08.843 [2024-11-21 02:34:49.423361] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.843 02:34:49 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:09.101 malloc0 00:18:09.101 02:34:49 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:09.359 02:34:49 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:09.618 02:34:50 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.618 02:34:50 -- target/tls.sh@197 -- # bdevperf_pid=78937 00:18:09.618 02:34:50 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.618 02:34:50 -- target/tls.sh@200 -- # waitforlisten 78937 /var/tmp/bdevperf.sock 00:18:09.618 
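Note: the setup_nvmf_tgt sequence above reduces to the following RPC calls against the target, with the listener created with -k so it accepts TLS connections and the PSK registered per host NQN (paths exactly as in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The initiator side (bdevperf) then attaches with the same key via bdev_nvme_attach_controller --psk, which produces the TLSTESTn1 bdev exercised below before save_config dumps the resulting target configuration.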
02:34:50 -- common/autotest_common.sh@829 -- # '[' -z 78937 ']' 00:18:09.618 02:34:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.618 02:34:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.618 02:34:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.618 02:34:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.618 02:34:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.618 [2024-11-21 02:34:50.088262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:09.618 [2024-11-21 02:34:50.088342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78937 ] 00:18:09.618 [2024-11-21 02:34:50.222772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.877 [2024-11-21 02:34:50.328407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.442 02:34:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.442 02:34:51 -- common/autotest_common.sh@862 -- # return 0 00:18:10.442 02:34:51 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:10.701 [2024-11-21 02:34:51.229844] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.701 TLSTESTn1 00:18:10.701 02:34:51 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:11.268 02:34:51 -- target/tls.sh@205 -- # tgtconf='{ 00:18:11.268 "subsystems": [ 00:18:11.268 { 00:18:11.268 "subsystem": "iobuf", 00:18:11.268 "config": [ 00:18:11.268 { 00:18:11.268 "method": "iobuf_set_options", 00:18:11.268 "params": { 00:18:11.268 "large_bufsize": 135168, 00:18:11.268 "large_pool_count": 1024, 00:18:11.268 "small_bufsize": 8192, 00:18:11.268 "small_pool_count": 8192 00:18:11.268 } 00:18:11.268 } 00:18:11.268 ] 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "subsystem": "sock", 00:18:11.268 "config": [ 00:18:11.268 { 00:18:11.268 "method": "sock_impl_set_options", 00:18:11.268 "params": { 00:18:11.268 "enable_ktls": false, 00:18:11.268 "enable_placement_id": 0, 00:18:11.268 "enable_quickack": false, 00:18:11.268 "enable_recv_pipe": true, 00:18:11.268 "enable_zerocopy_send_client": false, 00:18:11.268 "enable_zerocopy_send_server": true, 00:18:11.268 "impl_name": "posix", 00:18:11.268 "recv_buf_size": 2097152, 00:18:11.268 "send_buf_size": 2097152, 00:18:11.268 "tls_version": 0, 00:18:11.268 "zerocopy_threshold": 0 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "sock_impl_set_options", 00:18:11.268 "params": { 00:18:11.268 "enable_ktls": false, 00:18:11.268 "enable_placement_id": 0, 00:18:11.268 "enable_quickack": false, 00:18:11.268 "enable_recv_pipe": true, 00:18:11.268 "enable_zerocopy_send_client": false, 00:18:11.268 "enable_zerocopy_send_server": true, 00:18:11.268 "impl_name": "ssl", 00:18:11.268 "recv_buf_size": 4096, 00:18:11.268 "send_buf_size": 4096, 00:18:11.268 
"tls_version": 0, 00:18:11.268 "zerocopy_threshold": 0 00:18:11.268 } 00:18:11.268 } 00:18:11.268 ] 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "subsystem": "vmd", 00:18:11.268 "config": [] 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "subsystem": "accel", 00:18:11.268 "config": [ 00:18:11.268 { 00:18:11.268 "method": "accel_set_options", 00:18:11.268 "params": { 00:18:11.268 "buf_count": 2048, 00:18:11.268 "large_cache_size": 16, 00:18:11.268 "sequence_count": 2048, 00:18:11.268 "small_cache_size": 128, 00:18:11.268 "task_count": 2048 00:18:11.268 } 00:18:11.268 } 00:18:11.268 ] 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "subsystem": "bdev", 00:18:11.268 "config": [ 00:18:11.268 { 00:18:11.268 "method": "bdev_set_options", 00:18:11.268 "params": { 00:18:11.268 "bdev_auto_examine": true, 00:18:11.268 "bdev_io_cache_size": 256, 00:18:11.268 "bdev_io_pool_size": 65535, 00:18:11.268 "iobuf_large_cache_size": 16, 00:18:11.268 "iobuf_small_cache_size": 128 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "bdev_raid_set_options", 00:18:11.268 "params": { 00:18:11.268 "process_window_size_kb": 1024 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "bdev_iscsi_set_options", 00:18:11.268 "params": { 00:18:11.268 "timeout_sec": 30 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "bdev_nvme_set_options", 00:18:11.268 "params": { 00:18:11.268 "action_on_timeout": "none", 00:18:11.268 "allow_accel_sequence": false, 00:18:11.268 "arbitration_burst": 0, 00:18:11.268 "bdev_retry_count": 3, 00:18:11.268 "ctrlr_loss_timeout_sec": 0, 00:18:11.268 "delay_cmd_submit": true, 00:18:11.268 "fast_io_fail_timeout_sec": 0, 00:18:11.268 "generate_uuids": false, 00:18:11.268 "high_priority_weight": 0, 00:18:11.268 "io_path_stat": false, 00:18:11.268 "io_queue_requests": 0, 00:18:11.268 "keep_alive_timeout_ms": 10000, 00:18:11.268 "low_priority_weight": 0, 00:18:11.268 "medium_priority_weight": 0, 00:18:11.268 "nvme_adminq_poll_period_us": 10000, 00:18:11.268 "nvme_ioq_poll_period_us": 0, 00:18:11.268 "reconnect_delay_sec": 0, 00:18:11.268 "timeout_admin_us": 0, 00:18:11.268 "timeout_us": 0, 00:18:11.268 "transport_ack_timeout": 0, 00:18:11.268 "transport_retry_count": 4, 00:18:11.268 "transport_tos": 0 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "bdev_nvme_set_hotplug", 00:18:11.268 "params": { 00:18:11.268 "enable": false, 00:18:11.268 "period_us": 100000 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "bdev_malloc_create", 00:18:11.268 "params": { 00:18:11.268 "block_size": 4096, 00:18:11.268 "name": "malloc0", 00:18:11.268 "num_blocks": 8192, 00:18:11.268 "optimal_io_boundary": 0, 00:18:11.268 "physical_block_size": 4096, 00:18:11.268 "uuid": "6540f33e-e9c0-4c77-b382-57b2f7456ebf" 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "bdev_wait_for_examine" 00:18:11.268 } 00:18:11.268 ] 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "subsystem": "nbd", 00:18:11.268 "config": [] 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "subsystem": "scheduler", 00:18:11.268 "config": [ 00:18:11.268 { 00:18:11.268 "method": "framework_set_scheduler", 00:18:11.268 "params": { 00:18:11.268 "name": "static" 00:18:11.268 } 00:18:11.268 } 00:18:11.268 ] 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "subsystem": "nvmf", 00:18:11.268 "config": [ 00:18:11.268 { 00:18:11.268 "method": "nvmf_set_config", 00:18:11.268 "params": { 00:18:11.268 "admin_cmd_passthru": { 00:18:11.268 "identify_ctrlr": false 00:18:11.268 }, 
00:18:11.268 "discovery_filter": "match_any" 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "nvmf_set_max_subsystems", 00:18:11.268 "params": { 00:18:11.268 "max_subsystems": 1024 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "nvmf_set_crdt", 00:18:11.268 "params": { 00:18:11.268 "crdt1": 0, 00:18:11.268 "crdt2": 0, 00:18:11.268 "crdt3": 0 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "nvmf_create_transport", 00:18:11.268 "params": { 00:18:11.268 "abort_timeout_sec": 1, 00:18:11.268 "buf_cache_size": 4294967295, 00:18:11.268 "c2h_success": false, 00:18:11.268 "dif_insert_or_strip": false, 00:18:11.268 "in_capsule_data_size": 4096, 00:18:11.268 "io_unit_size": 131072, 00:18:11.268 "max_aq_depth": 128, 00:18:11.268 "max_io_qpairs_per_ctrlr": 127, 00:18:11.268 "max_io_size": 131072, 00:18:11.268 "max_queue_depth": 128, 00:18:11.268 "num_shared_buffers": 511, 00:18:11.268 "sock_priority": 0, 00:18:11.268 "trtype": "TCP", 00:18:11.268 "zcopy": false 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "method": "nvmf_create_subsystem", 00:18:11.268 "params": { 00:18:11.268 "allow_any_host": false, 00:18:11.268 "ana_reporting": false, 00:18:11.268 "max_cntlid": 65519, 00:18:11.268 "max_namespaces": 10, 00:18:11.268 "min_cntlid": 1, 00:18:11.268 "model_number": "SPDK bdev Controller", 00:18:11.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.268 "serial_number": "SPDK00000000000001" 00:18:11.268 } 00:18:11.268 }, 00:18:11.268 { 00:18:11.269 "method": "nvmf_subsystem_add_host", 00:18:11.269 "params": { 00:18:11.269 "host": "nqn.2016-06.io.spdk:host1", 00:18:11.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.269 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:11.269 } 00:18:11.269 }, 00:18:11.269 { 00:18:11.269 "method": "nvmf_subsystem_add_ns", 00:18:11.269 "params": { 00:18:11.269 "namespace": { 00:18:11.269 "bdev_name": "malloc0", 00:18:11.269 "nguid": "6540F33EE9C04C77B38257B2F7456EBF", 00:18:11.269 "nsid": 1, 00:18:11.269 "uuid": "6540f33e-e9c0-4c77-b382-57b2f7456ebf" 00:18:11.269 }, 00:18:11.269 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:11.269 } 00:18:11.269 }, 00:18:11.269 { 00:18:11.269 "method": "nvmf_subsystem_add_listener", 00:18:11.269 "params": { 00:18:11.269 "listen_address": { 00:18:11.269 "adrfam": "IPv4", 00:18:11.269 "traddr": "10.0.0.2", 00:18:11.269 "trsvcid": "4420", 00:18:11.269 "trtype": "TCP" 00:18:11.269 }, 00:18:11.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.269 "secure_channel": true 00:18:11.269 } 00:18:11.269 } 00:18:11.269 ] 00:18:11.269 } 00:18:11.269 ] 00:18:11.269 }' 00:18:11.269 02:34:51 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:11.528 02:34:51 -- target/tls.sh@206 -- # bdevperfconf='{ 00:18:11.528 "subsystems": [ 00:18:11.528 { 00:18:11.528 "subsystem": "iobuf", 00:18:11.528 "config": [ 00:18:11.528 { 00:18:11.528 "method": "iobuf_set_options", 00:18:11.528 "params": { 00:18:11.528 "large_bufsize": 135168, 00:18:11.528 "large_pool_count": 1024, 00:18:11.528 "small_bufsize": 8192, 00:18:11.528 "small_pool_count": 8192 00:18:11.528 } 00:18:11.528 } 00:18:11.528 ] 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "subsystem": "sock", 00:18:11.528 "config": [ 00:18:11.528 { 00:18:11.528 "method": "sock_impl_set_options", 00:18:11.528 "params": { 00:18:11.528 "enable_ktls": false, 00:18:11.528 "enable_placement_id": 0, 00:18:11.528 "enable_quickack": false, 00:18:11.528 "enable_recv_pipe": true, 
00:18:11.528 "enable_zerocopy_send_client": false, 00:18:11.528 "enable_zerocopy_send_server": true, 00:18:11.528 "impl_name": "posix", 00:18:11.528 "recv_buf_size": 2097152, 00:18:11.528 "send_buf_size": 2097152, 00:18:11.528 "tls_version": 0, 00:18:11.528 "zerocopy_threshold": 0 00:18:11.528 } 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "method": "sock_impl_set_options", 00:18:11.528 "params": { 00:18:11.528 "enable_ktls": false, 00:18:11.528 "enable_placement_id": 0, 00:18:11.528 "enable_quickack": false, 00:18:11.528 "enable_recv_pipe": true, 00:18:11.528 "enable_zerocopy_send_client": false, 00:18:11.528 "enable_zerocopy_send_server": true, 00:18:11.528 "impl_name": "ssl", 00:18:11.528 "recv_buf_size": 4096, 00:18:11.528 "send_buf_size": 4096, 00:18:11.528 "tls_version": 0, 00:18:11.528 "zerocopy_threshold": 0 00:18:11.528 } 00:18:11.528 } 00:18:11.528 ] 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "subsystem": "vmd", 00:18:11.528 "config": [] 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "subsystem": "accel", 00:18:11.528 "config": [ 00:18:11.528 { 00:18:11.528 "method": "accel_set_options", 00:18:11.528 "params": { 00:18:11.528 "buf_count": 2048, 00:18:11.528 "large_cache_size": 16, 00:18:11.528 "sequence_count": 2048, 00:18:11.528 "small_cache_size": 128, 00:18:11.528 "task_count": 2048 00:18:11.528 } 00:18:11.528 } 00:18:11.528 ] 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "subsystem": "bdev", 00:18:11.528 "config": [ 00:18:11.528 { 00:18:11.528 "method": "bdev_set_options", 00:18:11.528 "params": { 00:18:11.528 "bdev_auto_examine": true, 00:18:11.528 "bdev_io_cache_size": 256, 00:18:11.528 "bdev_io_pool_size": 65535, 00:18:11.528 "iobuf_large_cache_size": 16, 00:18:11.528 "iobuf_small_cache_size": 128 00:18:11.528 } 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "method": "bdev_raid_set_options", 00:18:11.528 "params": { 00:18:11.528 "process_window_size_kb": 1024 00:18:11.528 } 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "method": "bdev_iscsi_set_options", 00:18:11.528 "params": { 00:18:11.528 "timeout_sec": 30 00:18:11.528 } 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "method": "bdev_nvme_set_options", 00:18:11.528 "params": { 00:18:11.528 "action_on_timeout": "none", 00:18:11.528 "allow_accel_sequence": false, 00:18:11.528 "arbitration_burst": 0, 00:18:11.528 "bdev_retry_count": 3, 00:18:11.528 "ctrlr_loss_timeout_sec": 0, 00:18:11.528 "delay_cmd_submit": true, 00:18:11.528 "fast_io_fail_timeout_sec": 0, 00:18:11.528 "generate_uuids": false, 00:18:11.528 "high_priority_weight": 0, 00:18:11.528 "io_path_stat": false, 00:18:11.528 "io_queue_requests": 512, 00:18:11.528 "keep_alive_timeout_ms": 10000, 00:18:11.528 "low_priority_weight": 0, 00:18:11.528 "medium_priority_weight": 0, 00:18:11.528 "nvme_adminq_poll_period_us": 10000, 00:18:11.528 "nvme_ioq_poll_period_us": 0, 00:18:11.528 "reconnect_delay_sec": 0, 00:18:11.528 "timeout_admin_us": 0, 00:18:11.528 "timeout_us": 0, 00:18:11.528 "transport_ack_timeout": 0, 00:18:11.528 "transport_retry_count": 4, 00:18:11.528 "transport_tos": 0 00:18:11.528 } 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "method": "bdev_nvme_attach_controller", 00:18:11.528 "params": { 00:18:11.528 "adrfam": "IPv4", 00:18:11.528 "ctrlr_loss_timeout_sec": 0, 00:18:11.528 "ddgst": false, 00:18:11.528 "fast_io_fail_timeout_sec": 0, 00:18:11.528 "hdgst": false, 00:18:11.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.528 "name": "TLSTEST", 00:18:11.528 "prchk_guard": false, 00:18:11.528 "prchk_reftag": false, 00:18:11.528 "psk": 
"/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:11.528 "reconnect_delay_sec": 0, 00:18:11.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.528 "traddr": "10.0.0.2", 00:18:11.528 "trsvcid": "4420", 00:18:11.528 "trtype": "TCP" 00:18:11.528 } 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "method": "bdev_nvme_set_hotplug", 00:18:11.528 "params": { 00:18:11.528 "enable": false, 00:18:11.528 "period_us": 100000 00:18:11.528 } 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "method": "bdev_wait_for_examine" 00:18:11.528 } 00:18:11.528 ] 00:18:11.528 }, 00:18:11.528 { 00:18:11.528 "subsystem": "nbd", 00:18:11.528 "config": [] 00:18:11.528 } 00:18:11.528 ] 00:18:11.528 }' 00:18:11.528 02:34:51 -- target/tls.sh@208 -- # killprocess 78937 00:18:11.528 02:34:51 -- common/autotest_common.sh@936 -- # '[' -z 78937 ']' 00:18:11.528 02:34:51 -- common/autotest_common.sh@940 -- # kill -0 78937 00:18:11.528 02:34:51 -- common/autotest_common.sh@941 -- # uname 00:18:11.528 02:34:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.528 02:34:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78937 00:18:11.528 02:34:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:11.528 killing process with pid 78937 00:18:11.528 02:34:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:11.528 02:34:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78937' 00:18:11.529 02:34:52 -- common/autotest_common.sh@955 -- # kill 78937 00:18:11.529 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.529 00:18:11.529 Latency(us) 00:18:11.529 [2024-11-21T02:34:52.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.529 [2024-11-21T02:34:52.176Z] =================================================================================================================== 00:18:11.529 [2024-11-21T02:34:52.176Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.529 02:34:52 -- common/autotest_common.sh@960 -- # wait 78937 00:18:11.788 02:34:52 -- target/tls.sh@209 -- # killprocess 78840 00:18:11.788 02:34:52 -- common/autotest_common.sh@936 -- # '[' -z 78840 ']' 00:18:11.788 02:34:52 -- common/autotest_common.sh@940 -- # kill -0 78840 00:18:11.788 02:34:52 -- common/autotest_common.sh@941 -- # uname 00:18:11.788 02:34:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.788 02:34:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78840 00:18:11.788 02:34:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:11.788 killing process with pid 78840 00:18:11.788 02:34:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:11.788 02:34:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78840' 00:18:11.788 02:34:52 -- common/autotest_common.sh@955 -- # kill 78840 00:18:11.788 02:34:52 -- common/autotest_common.sh@960 -- # wait 78840 00:18:12.048 02:34:52 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:12.048 02:34:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:12.048 02:34:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:12.048 02:34:52 -- target/tls.sh@212 -- # echo '{ 00:18:12.048 "subsystems": [ 00:18:12.048 { 00:18:12.048 "subsystem": "iobuf", 00:18:12.048 "config": [ 00:18:12.048 { 00:18:12.048 "method": "iobuf_set_options", 00:18:12.048 "params": { 00:18:12.048 "large_bufsize": 135168, 00:18:12.048 "large_pool_count": 1024, 
00:18:12.048 "small_bufsize": 8192, 00:18:12.048 "small_pool_count": 8192 00:18:12.048 } 00:18:12.048 } 00:18:12.048 ] 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "subsystem": "sock", 00:18:12.048 "config": [ 00:18:12.048 { 00:18:12.048 "method": "sock_impl_set_options", 00:18:12.048 "params": { 00:18:12.048 "enable_ktls": false, 00:18:12.048 "enable_placement_id": 0, 00:18:12.048 "enable_quickack": false, 00:18:12.048 "enable_recv_pipe": true, 00:18:12.048 "enable_zerocopy_send_client": false, 00:18:12.048 "enable_zerocopy_send_server": true, 00:18:12.048 "impl_name": "posix", 00:18:12.048 "recv_buf_size": 2097152, 00:18:12.048 "send_buf_size": 2097152, 00:18:12.048 "tls_version": 0, 00:18:12.048 "zerocopy_threshold": 0 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "sock_impl_set_options", 00:18:12.048 "params": { 00:18:12.048 "enable_ktls": false, 00:18:12.048 "enable_placement_id": 0, 00:18:12.048 "enable_quickack": false, 00:18:12.048 "enable_recv_pipe": true, 00:18:12.048 "enable_zerocopy_send_client": false, 00:18:12.048 "enable_zerocopy_send_server": true, 00:18:12.048 "impl_name": "ssl", 00:18:12.048 "recv_buf_size": 4096, 00:18:12.048 "send_buf_size": 4096, 00:18:12.048 "tls_version": 0, 00:18:12.048 "zerocopy_threshold": 0 00:18:12.048 } 00:18:12.048 } 00:18:12.048 ] 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "subsystem": "vmd", 00:18:12.048 "config": [] 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "subsystem": "accel", 00:18:12.048 "config": [ 00:18:12.048 { 00:18:12.048 "method": "accel_set_options", 00:18:12.048 "params": { 00:18:12.048 "buf_count": 2048, 00:18:12.048 "large_cache_size": 16, 00:18:12.048 "sequence_count": 2048, 00:18:12.048 "small_cache_size": 128, 00:18:12.048 "task_count": 2048 00:18:12.048 } 00:18:12.048 } 00:18:12.048 ] 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "subsystem": "bdev", 00:18:12.048 "config": [ 00:18:12.048 { 00:18:12.048 "method": "bdev_set_options", 00:18:12.048 "params": { 00:18:12.048 "bdev_auto_examine": true, 00:18:12.048 "bdev_io_cache_size": 256, 00:18:12.048 "bdev_io_pool_size": 65535, 00:18:12.048 "iobuf_large_cache_size": 16, 00:18:12.048 "iobuf_small_cache_size": 128 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "bdev_raid_set_options", 00:18:12.048 "params": { 00:18:12.048 "process_window_size_kb": 1024 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "bdev_iscsi_set_options", 00:18:12.048 "params": { 00:18:12.048 "timeout_sec": 30 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "bdev_nvme_set_options", 00:18:12.048 "params": { 00:18:12.048 "action_on_timeout": "none", 00:18:12.048 "allow_accel_sequence": false, 00:18:12.048 "arbitration_burst": 0, 00:18:12.048 "bdev_retry_count": 3, 00:18:12.048 "ctrlr_loss_timeout_sec": 0, 00:18:12.048 "delay_cmd_submit": true, 00:18:12.048 "fast_io_fail_timeout_sec": 0, 00:18:12.048 "generate_uuids": false, 00:18:12.048 "high_priority_weight": 0, 00:18:12.048 "io_path_stat": false, 00:18:12.048 "io_queue_requests": 0, 00:18:12.048 "keep_alive_timeout_ms": 10000, 00:18:12.048 "low_priority_weight": 0, 00:18:12.048 "medium_priority_weight": 0, 00:18:12.048 "nvme_adminq_poll_period_us": 10000, 00:18:12.048 "nvme_ioq_poll_period_us": 0, 00:18:12.048 "reconnect_delay_sec": 0, 00:18:12.048 "timeout_admin_us": 0, 00:18:12.048 "timeout_us": 0, 00:18:12.048 "transport_ack_timeout": 0, 00:18:12.048 "transport_retry_count": 4, 00:18:12.048 "transport_tos": 0 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 
"method": "bdev_nvme_set_hotplug", 00:18:12.048 "params": { 00:18:12.048 "enable": false, 00:18:12.048 "period_us": 100000 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "bdev_malloc_create", 00:18:12.048 "params": { 00:18:12.048 "block_size": 4096, 00:18:12.048 "name": "malloc0", 00:18:12.048 "num_blocks": 8192, 00:18:12.048 "optimal_io_boundary": 0, 00:18:12.048 "physical_block_size": 4096, 00:18:12.048 "uuid": "6540f33e-e9c0-4c77-b382-57b2f7456ebf" 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "bdev_wait_for_examine" 00:18:12.048 } 00:18:12.048 ] 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "subsystem": "nbd", 00:18:12.048 "config": [] 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "subsystem": "scheduler", 00:18:12.048 "config": [ 00:18:12.048 { 00:18:12.048 "method": "framework_set_scheduler", 00:18:12.048 "params": { 00:18:12.048 "name": "static" 00:18:12.048 } 00:18:12.048 } 00:18:12.048 ] 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "subsystem": "nvmf", 00:18:12.048 "config": [ 00:18:12.048 { 00:18:12.048 "method": "nvmf_set_config", 00:18:12.048 "params": { 00:18:12.048 "admin_cmd_passthru": { 00:18:12.048 "identify_ctrlr": false 00:18:12.048 }, 00:18:12.048 "discovery_filter": "match_any" 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "nvmf_set_max_subsystems", 00:18:12.048 "params": { 00:18:12.048 "max_subsystems": 1024 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "nvmf_set_crdt", 00:18:12.048 "params": { 00:18:12.048 "crdt1": 0, 00:18:12.048 "crdt2": 0, 00:18:12.048 "crdt3": 0 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "nvmf_create_transport", 00:18:12.048 "params": { 00:18:12.048 "abort_timeout_sec": 1, 00:18:12.048 "buf_cache_size": 4294967295, 00:18:12.048 "c2h_success": false, 00:18:12.048 "dif_insert_or_strip": false, 00:18:12.048 "in_capsule_data_size": 4096, 00:18:12.048 "io_unit_size": 131072, 00:18:12.048 "max_aq_depth": 128, 00:18:12.048 "max_io_qpairs_per_ctrlr": 127, 00:18:12.048 "max_io_size": 131072, 00:18:12.048 "max_queue_depth": 128, 00:18:12.048 "num_shared_buffers": 511, 00:18:12.048 "sock_priority": 0, 00:18:12.048 "trtype": "TCP", 00:18:12.048 "zcopy": false 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "nvmf_create_subsystem", 00:18:12.048 "params": { 00:18:12.048 "allow_any_host": false, 00:18:12.048 "ana_reporting": false, 00:18:12.048 "max_cntlid": 65519, 00:18:12.048 "max_namespaces": 10, 00:18:12.048 "min_cntlid": 1, 00:18:12.048 "model_number": "SPDK bdev Controller", 00:18:12.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.048 "serial_number": "SPDK00000000000001" 00:18:12.048 } 00:18:12.048 }, 00:18:12.048 { 00:18:12.048 "method": "nvmf_subsystem_add_host", 00:18:12.048 "params": { 00:18:12.048 "host": "nqn.2016-06.io.spdk:host1", 00:18:12.048 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.048 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:12.049 } 00:18:12.049 }, 00:18:12.049 { 00:18:12.049 "method": "nvmf_subsystem_add_ns", 00:18:12.049 "params": { 00:18:12.049 "namespace": { 00:18:12.049 "bdev_name": "malloc0", 00:18:12.049 "nguid": "6540F33EE9C04C77B38257B2F7456EBF", 00:18:12.049 "nsid": 1, 00:18:12.049 "uuid": "6540f33e-e9c0-4c77-b382-57b2f7456ebf" 00:18:12.049 }, 00:18:12.049 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:12.049 } 00:18:12.049 }, 00:18:12.049 { 00:18:12.049 "method": "nvmf_subsystem_add_listener", 00:18:12.049 "params": { 00:18:12.049 "listen_address": { 00:18:12.049 
"adrfam": "IPv4", 00:18:12.049 "traddr": "10.0.0.2", 00:18:12.049 "trsvcid": "4420", 00:18:12.049 "trtype": "TCP" 00:18:12.049 }, 00:18:12.049 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.049 "secure_channel": true 00:18:12.049 } 00:18:12.049 } 00:18:12.049 ] 00:18:12.049 } 00:18:12.049 ] 00:18:12.049 }' 00:18:12.049 02:34:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.049 02:34:52 -- nvmf/common.sh@469 -- # nvmfpid=79010 00:18:12.049 02:34:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:12.049 02:34:52 -- nvmf/common.sh@470 -- # waitforlisten 79010 00:18:12.049 02:34:52 -- common/autotest_common.sh@829 -- # '[' -z 79010 ']' 00:18:12.049 02:34:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.049 02:34:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.049 02:34:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.049 02:34:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.049 02:34:52 -- common/autotest_common.sh@10 -- # set +x 00:18:12.308 [2024-11-21 02:34:52.726600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:12.308 [2024-11-21 02:34:52.726689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.308 [2024-11-21 02:34:52.866738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.308 [2024-11-21 02:34:52.946373] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:12.308 [2024-11-21 02:34:52.946521] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.308 [2024-11-21 02:34:52.946537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.308 [2024-11-21 02:34:52.946546] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:12.308 [2024-11-21 02:34:52.946586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.567 [2024-11-21 02:34:53.194358] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.826 [2024-11-21 02:34:53.226326] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.826 [2024-11-21 02:34:53.226569] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.087 02:34:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.087 02:34:53 -- common/autotest_common.sh@862 -- # return 0 00:18:13.087 02:34:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:13.087 02:34:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.087 02:34:53 -- common/autotest_common.sh@10 -- # set +x 00:18:13.087 02:34:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.087 02:34:53 -- target/tls.sh@216 -- # bdevperf_pid=79059 00:18:13.087 02:34:53 -- target/tls.sh@217 -- # waitforlisten 79059 /var/tmp/bdevperf.sock 00:18:13.087 02:34:53 -- common/autotest_common.sh@829 -- # '[' -z 79059 ']' 00:18:13.087 02:34:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.087 02:34:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.087 02:34:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.087 02:34:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.087 02:34:53 -- common/autotest_common.sh@10 -- # set +x 00:18:13.087 02:34:53 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:13.087 02:34:53 -- target/tls.sh@213 -- # echo '{ 00:18:13.087 "subsystems": [ 00:18:13.087 { 00:18:13.087 "subsystem": "iobuf", 00:18:13.087 "config": [ 00:18:13.087 { 00:18:13.087 "method": "iobuf_set_options", 00:18:13.087 "params": { 00:18:13.087 "large_bufsize": 135168, 00:18:13.087 "large_pool_count": 1024, 00:18:13.087 "small_bufsize": 8192, 00:18:13.087 "small_pool_count": 8192 00:18:13.087 } 00:18:13.087 } 00:18:13.087 ] 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "subsystem": "sock", 00:18:13.087 "config": [ 00:18:13.087 { 00:18:13.087 "method": "sock_impl_set_options", 00:18:13.087 "params": { 00:18:13.087 "enable_ktls": false, 00:18:13.087 "enable_placement_id": 0, 00:18:13.087 "enable_quickack": false, 00:18:13.087 "enable_recv_pipe": true, 00:18:13.087 "enable_zerocopy_send_client": false, 00:18:13.087 "enable_zerocopy_send_server": true, 00:18:13.087 "impl_name": "posix", 00:18:13.087 "recv_buf_size": 2097152, 00:18:13.087 "send_buf_size": 2097152, 00:18:13.087 "tls_version": 0, 00:18:13.087 "zerocopy_threshold": 0 00:18:13.087 } 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "method": "sock_impl_set_options", 00:18:13.087 "params": { 00:18:13.087 "enable_ktls": false, 00:18:13.087 "enable_placement_id": 0, 00:18:13.087 "enable_quickack": false, 00:18:13.087 "enable_recv_pipe": true, 00:18:13.087 "enable_zerocopy_send_client": false, 00:18:13.087 "enable_zerocopy_send_server": true, 00:18:13.087 "impl_name": "ssl", 00:18:13.087 "recv_buf_size": 4096, 00:18:13.087 "send_buf_size": 4096, 00:18:13.087 "tls_version": 0, 00:18:13.087 "zerocopy_threshold": 0 
00:18:13.087 } 00:18:13.087 } 00:18:13.087 ] 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "subsystem": "vmd", 00:18:13.087 "config": [] 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "subsystem": "accel", 00:18:13.087 "config": [ 00:18:13.087 { 00:18:13.087 "method": "accel_set_options", 00:18:13.087 "params": { 00:18:13.087 "buf_count": 2048, 00:18:13.087 "large_cache_size": 16, 00:18:13.087 "sequence_count": 2048, 00:18:13.087 "small_cache_size": 128, 00:18:13.087 "task_count": 2048 00:18:13.087 } 00:18:13.087 } 00:18:13.087 ] 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "subsystem": "bdev", 00:18:13.087 "config": [ 00:18:13.087 { 00:18:13.087 "method": "bdev_set_options", 00:18:13.087 "params": { 00:18:13.087 "bdev_auto_examine": true, 00:18:13.087 "bdev_io_cache_size": 256, 00:18:13.087 "bdev_io_pool_size": 65535, 00:18:13.087 "iobuf_large_cache_size": 16, 00:18:13.087 "iobuf_small_cache_size": 128 00:18:13.087 } 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "method": "bdev_raid_set_options", 00:18:13.087 "params": { 00:18:13.087 "process_window_size_kb": 1024 00:18:13.087 } 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "method": "bdev_iscsi_set_options", 00:18:13.087 "params": { 00:18:13.087 "timeout_sec": 30 00:18:13.087 } 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "method": "bdev_nvme_set_options", 00:18:13.087 "params": { 00:18:13.087 "action_on_timeout": "none", 00:18:13.087 "allow_accel_sequence": false, 00:18:13.087 "arbitration_burst": 0, 00:18:13.087 "bdev_retry_count": 3, 00:18:13.087 "ctrlr_loss_timeout_sec": 0, 00:18:13.087 "delay_cmd_submit": true, 00:18:13.087 "fast_io_fail_timeout_sec": 0, 00:18:13.087 "generate_uuids": false, 00:18:13.087 "high_priority_weight": 0, 00:18:13.087 "io_path_stat": false, 00:18:13.087 "io_queue_requests": 512, 00:18:13.087 "keep_alive_timeout_ms": 10000, 00:18:13.087 "low_priority_weight": 0, 00:18:13.087 "medium_priority_weight": 0, 00:18:13.087 "nvme_adminq_poll_period_us": 10000, 00:18:13.087 "nvme_ioq_poll_period_us": 0, 00:18:13.087 "reconnect_delay_sec": 0, 00:18:13.087 "timeout_admin_us": 0, 00:18:13.087 "timeout_us": 0, 00:18:13.087 "transport_ack_timeout": 0, 00:18:13.087 "transport_retry_count": 4, 00:18:13.087 "transport_tos": 0 00:18:13.087 } 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "method": "bdev_nvme_attach_controller", 00:18:13.087 "params": { 00:18:13.087 "adrfam": "IPv4", 00:18:13.087 "ctrlr_loss_timeout_sec": 0, 00:18:13.087 "ddgst": false, 00:18:13.087 "fast_io_fail_timeout_sec": 0, 00:18:13.087 "hdgst": false, 00:18:13.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.087 "name": "TLSTEST", 00:18:13.087 "prchk_guard": false, 00:18:13.087 "prchk_reftag": false, 00:18:13.087 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:13.087 "reconnect_delay_sec": 0, 00:18:13.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.087 "traddr": "10.0.0.2", 00:18:13.087 "trsvcid": "4420", 00:18:13.087 "trtype": "TCP" 00:18:13.087 } 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "method": "bdev_nvme_set_hotplug", 00:18:13.087 "params": { 00:18:13.087 "enable": false, 00:18:13.087 "period_us": 100000 00:18:13.087 } 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "method": "bdev_wait_for_examine" 00:18:13.087 } 00:18:13.087 ] 00:18:13.087 }, 00:18:13.087 { 00:18:13.087 "subsystem": "nbd", 00:18:13.088 "config": [] 00:18:13.088 } 00:18:13.088 ] 00:18:13.088 }' 00:18:13.347 [2024-11-21 02:34:53.764605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:13.347 [2024-11-21 02:34:53.764687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79059 ] 00:18:13.347 [2024-11-21 02:34:53.902228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.607 [2024-11-21 02:34:54.004300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.607 [2024-11-21 02:34:54.154902] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.174 02:34:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.174 02:34:54 -- common/autotest_common.sh@862 -- # return 0 00:18:14.174 02:34:54 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:14.174 Running I/O for 10 seconds... 00:18:26.386 00:18:26.386 Latency(us) 00:18:26.386 [2024-11-21T02:35:07.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.386 [2024-11-21T02:35:07.033Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:26.386 Verification LBA range: start 0x0 length 0x2000 00:18:26.386 TLSTESTn1 : 10.01 5488.54 21.44 0.00 0.00 23288.92 4408.79 23712.12 00:18:26.386 [2024-11-21T02:35:07.033Z] =================================================================================================================== 00:18:26.386 [2024-11-21T02:35:07.033Z] Total : 5488.54 21.44 0.00 0.00 23288.92 4408.79 23712.12 00:18:26.386 0 00:18:26.386 02:35:04 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.386 02:35:04 -- target/tls.sh@223 -- # killprocess 79059 00:18:26.386 02:35:04 -- common/autotest_common.sh@936 -- # '[' -z 79059 ']' 00:18:26.386 02:35:04 -- common/autotest_common.sh@940 -- # kill -0 79059 00:18:26.386 02:35:04 -- common/autotest_common.sh@941 -- # uname 00:18:26.386 02:35:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.386 02:35:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79059 00:18:26.386 killing process with pid 79059 00:18:26.386 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.386 00:18:26.386 Latency(us) 00:18:26.386 [2024-11-21T02:35:07.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.387 [2024-11-21T02:35:07.034Z] =================================================================================================================== 00:18:26.387 [2024-11-21T02:35:07.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.387 02:35:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:26.387 02:35:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:26.387 02:35:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79059' 00:18:26.387 02:35:04 -- common/autotest_common.sh@955 -- # kill 79059 00:18:26.387 02:35:04 -- common/autotest_common.sh@960 -- # wait 79059 00:18:26.387 02:35:05 -- target/tls.sh@224 -- # killprocess 79010 00:18:26.387 02:35:05 -- common/autotest_common.sh@936 -- # '[' -z 79010 ']' 00:18:26.387 02:35:05 -- common/autotest_common.sh@940 -- # kill -0 79010 00:18:26.387 02:35:05 -- common/autotest_common.sh@941 -- # uname 00:18:26.387 02:35:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.387 02:35:05 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 79010 00:18:26.387 killing process with pid 79010 00:18:26.387 02:35:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:26.387 02:35:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:26.387 02:35:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79010' 00:18:26.387 02:35:05 -- common/autotest_common.sh@955 -- # kill 79010 00:18:26.387 02:35:05 -- common/autotest_common.sh@960 -- # wait 79010 00:18:26.387 02:35:05 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:26.387 02:35:05 -- target/tls.sh@227 -- # cleanup 00:18:26.387 02:35:05 -- target/tls.sh@15 -- # process_shm --id 0 00:18:26.387 02:35:05 -- common/autotest_common.sh@806 -- # type=--id 00:18:26.387 02:35:05 -- common/autotest_common.sh@807 -- # id=0 00:18:26.387 02:35:05 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:26.387 02:35:05 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:26.387 02:35:05 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:26.387 02:35:05 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:26.387 02:35:05 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:26.387 02:35:05 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:26.387 nvmf_trace.0 00:18:26.387 02:35:05 -- common/autotest_common.sh@821 -- # return 0 00:18:26.387 02:35:05 -- target/tls.sh@16 -- # killprocess 79059 00:18:26.387 02:35:05 -- common/autotest_common.sh@936 -- # '[' -z 79059 ']' 00:18:26.387 02:35:05 -- common/autotest_common.sh@940 -- # kill -0 79059 00:18:26.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79059) - No such process 00:18:26.387 Process with pid 79059 is not found 00:18:26.387 02:35:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79059 is not found' 00:18:26.387 02:35:05 -- target/tls.sh@17 -- # nvmftestfini 00:18:26.387 02:35:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:26.387 02:35:05 -- nvmf/common.sh@116 -- # sync 00:18:26.387 02:35:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:26.387 02:35:05 -- nvmf/common.sh@119 -- # set +e 00:18:26.387 02:35:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:26.387 02:35:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:26.387 rmmod nvme_tcp 00:18:26.387 rmmod nvme_fabrics 00:18:26.387 rmmod nvme_keyring 00:18:26.387 02:35:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:26.387 02:35:05 -- nvmf/common.sh@123 -- # set -e 00:18:26.387 02:35:05 -- nvmf/common.sh@124 -- # return 0 00:18:26.387 02:35:05 -- nvmf/common.sh@477 -- # '[' -n 79010 ']' 00:18:26.387 02:35:05 -- nvmf/common.sh@478 -- # killprocess 79010 00:18:26.387 02:35:05 -- common/autotest_common.sh@936 -- # '[' -z 79010 ']' 00:18:26.387 Process with pid 79010 is not found 00:18:26.387 02:35:05 -- common/autotest_common.sh@940 -- # kill -0 79010 00:18:26.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79010) - No such process 00:18:26.387 02:35:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79010 is not found' 00:18:26.387 02:35:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:26.387 02:35:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:26.387 02:35:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:26.387 02:35:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:18:26.387 02:35:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:26.387 02:35:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.387 02:35:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.387 02:35:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.387 02:35:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:26.387 02:35:05 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:26.387 00:18:26.387 real 1m12.256s 00:18:26.387 user 1m47.461s 00:18:26.387 sys 0m27.383s 00:18:26.387 02:35:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:26.387 02:35:05 -- common/autotest_common.sh@10 -- # set +x 00:18:26.387 ************************************ 00:18:26.387 END TEST nvmf_tls 00:18:26.387 ************************************ 00:18:26.387 02:35:05 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:26.387 02:35:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:26.387 02:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:26.387 02:35:05 -- common/autotest_common.sh@10 -- # set +x 00:18:26.387 ************************************ 00:18:26.387 START TEST nvmf_fips 00:18:26.387 ************************************ 00:18:26.387 02:35:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:26.387 * Looking for test storage... 00:18:26.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:26.387 02:35:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:26.387 02:35:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:26.387 02:35:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:26.387 02:35:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:26.387 02:35:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:26.387 02:35:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:26.387 02:35:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:26.387 02:35:05 -- scripts/common.sh@335 -- # IFS=.-: 00:18:26.387 02:35:05 -- scripts/common.sh@335 -- # read -ra ver1 00:18:26.387 02:35:05 -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.387 02:35:05 -- scripts/common.sh@336 -- # read -ra ver2 00:18:26.387 02:35:05 -- scripts/common.sh@337 -- # local 'op=<' 00:18:26.387 02:35:05 -- scripts/common.sh@339 -- # ver1_l=2 00:18:26.387 02:35:05 -- scripts/common.sh@340 -- # ver2_l=1 00:18:26.387 02:35:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:26.387 02:35:05 -- scripts/common.sh@343 -- # case "$op" in 00:18:26.387 02:35:05 -- scripts/common.sh@344 -- # : 1 00:18:26.387 02:35:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:26.387 02:35:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.387 02:35:05 -- scripts/common.sh@364 -- # decimal 1 00:18:26.387 02:35:05 -- scripts/common.sh@352 -- # local d=1 00:18:26.387 02:35:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.387 02:35:05 -- scripts/common.sh@354 -- # echo 1 00:18:26.387 02:35:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:26.387 02:35:05 -- scripts/common.sh@365 -- # decimal 2 00:18:26.387 02:35:05 -- scripts/common.sh@352 -- # local d=2 00:18:26.387 02:35:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.387 02:35:05 -- scripts/common.sh@354 -- # echo 2 00:18:26.387 02:35:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:26.387 02:35:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:26.387 02:35:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:26.387 02:35:05 -- scripts/common.sh@367 -- # return 0 00:18:26.387 02:35:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.387 02:35:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:26.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.387 --rc genhtml_branch_coverage=1 00:18:26.387 --rc genhtml_function_coverage=1 00:18:26.387 --rc genhtml_legend=1 00:18:26.387 --rc geninfo_all_blocks=1 00:18:26.387 --rc geninfo_unexecuted_blocks=1 00:18:26.387 00:18:26.387 ' 00:18:26.387 02:35:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:26.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.387 --rc genhtml_branch_coverage=1 00:18:26.387 --rc genhtml_function_coverage=1 00:18:26.387 --rc genhtml_legend=1 00:18:26.387 --rc geninfo_all_blocks=1 00:18:26.387 --rc geninfo_unexecuted_blocks=1 00:18:26.387 00:18:26.387 ' 00:18:26.387 02:35:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:26.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.387 --rc genhtml_branch_coverage=1 00:18:26.387 --rc genhtml_function_coverage=1 00:18:26.387 --rc genhtml_legend=1 00:18:26.387 --rc geninfo_all_blocks=1 00:18:26.387 --rc geninfo_unexecuted_blocks=1 00:18:26.387 00:18:26.387 ' 00:18:26.387 02:35:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:26.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.387 --rc genhtml_branch_coverage=1 00:18:26.387 --rc genhtml_function_coverage=1 00:18:26.387 --rc genhtml_legend=1 00:18:26.387 --rc geninfo_all_blocks=1 00:18:26.387 --rc geninfo_unexecuted_blocks=1 00:18:26.387 00:18:26.387 ' 00:18:26.388 02:35:05 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.388 02:35:05 -- nvmf/common.sh@7 -- # uname -s 00:18:26.388 02:35:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.388 02:35:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.388 02:35:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.388 02:35:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.388 02:35:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.388 02:35:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.388 02:35:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.388 02:35:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.388 02:35:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.388 02:35:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.388 02:35:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:18:26.388 
02:35:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:18:26.388 02:35:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.388 02:35:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.388 02:35:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.388 02:35:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.388 02:35:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.388 02:35:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.388 02:35:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.388 02:35:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.388 02:35:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.388 02:35:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.388 02:35:05 -- paths/export.sh@5 -- # export PATH 00:18:26.388 02:35:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.388 02:35:05 -- nvmf/common.sh@46 -- # : 0 00:18:26.388 02:35:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:26.388 02:35:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:26.388 02:35:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:26.388 02:35:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.388 02:35:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.388 02:35:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:26.388 02:35:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:26.388 02:35:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:26.388 02:35:05 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.388 02:35:05 -- fips/fips.sh@89 -- # check_openssl_version 00:18:26.388 02:35:05 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:26.388 02:35:05 -- fips/fips.sh@85 -- # openssl version 00:18:26.388 02:35:05 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:26.388 02:35:05 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:26.388 02:35:05 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:26.388 02:35:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:26.388 02:35:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:26.388 02:35:05 -- scripts/common.sh@335 -- # IFS=.-: 00:18:26.388 02:35:05 -- scripts/common.sh@335 -- # read -ra ver1 00:18:26.388 02:35:05 -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.388 02:35:05 -- scripts/common.sh@336 -- # read -ra ver2 00:18:26.388 02:35:05 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:26.388 02:35:05 -- scripts/common.sh@339 -- # ver1_l=3 00:18:26.388 02:35:05 -- scripts/common.sh@340 -- # ver2_l=3 00:18:26.388 02:35:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:26.388 02:35:05 -- scripts/common.sh@343 -- # case "$op" in 00:18:26.388 02:35:05 -- scripts/common.sh@347 -- # : 1 00:18:26.388 02:35:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:26.388 02:35:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.388 02:35:05 -- scripts/common.sh@364 -- # decimal 3 00:18:26.388 02:35:05 -- scripts/common.sh@352 -- # local d=3 00:18:26.388 02:35:05 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:26.388 02:35:05 -- scripts/common.sh@354 -- # echo 3 00:18:26.388 02:35:05 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:26.388 02:35:05 -- scripts/common.sh@365 -- # decimal 3 00:18:26.388 02:35:05 -- scripts/common.sh@352 -- # local d=3 00:18:26.388 02:35:05 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:26.388 02:35:05 -- scripts/common.sh@354 -- # echo 3 00:18:26.388 02:35:05 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:26.388 02:35:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:26.388 02:35:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:26.388 02:35:05 -- scripts/common.sh@363 -- # (( v++ )) 00:18:26.388 02:35:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.388 02:35:05 -- scripts/common.sh@364 -- # decimal 1 00:18:26.388 02:35:05 -- scripts/common.sh@352 -- # local d=1 00:18:26.388 02:35:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.388 02:35:05 -- scripts/common.sh@354 -- # echo 1 00:18:26.388 02:35:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:26.388 02:35:05 -- scripts/common.sh@365 -- # decimal 0 00:18:26.388 02:35:05 -- scripts/common.sh@352 -- # local d=0 00:18:26.388 02:35:05 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:26.388 02:35:05 -- scripts/common.sh@354 -- # echo 0 00:18:26.388 02:35:05 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:26.388 02:35:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:26.388 02:35:05 -- scripts/common.sh@366 -- # return 0 00:18:26.388 02:35:05 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:26.388 02:35:05 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:26.388 02:35:05 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:26.388 02:35:05 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:26.388 02:35:05 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:26.388 02:35:05 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:26.388 02:35:05 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:26.388 02:35:05 -- fips/fips.sh@113 -- # build_openssl_config 00:18:26.388 02:35:05 -- fips/fips.sh@37 -- # cat 00:18:26.388 02:35:05 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:26.388 02:35:05 -- fips/fips.sh@58 -- # cat - 00:18:26.388 02:35:05 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:26.388 02:35:05 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:26.388 02:35:05 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:26.388 02:35:05 -- fips/fips.sh@116 -- # openssl list -providers 00:18:26.388 02:35:05 -- fips/fips.sh@116 -- # grep name 00:18:26.388 02:35:06 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:26.388 02:35:06 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:26.388 02:35:06 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:26.388 02:35:06 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:26.388 02:35:06 -- fips/fips.sh@127 -- # : 00:18:26.388 02:35:06 -- common/autotest_common.sh@650 -- # local es=0 00:18:26.388 02:35:06 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:26.388 02:35:06 -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:26.388 02:35:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.388 02:35:06 -- common/autotest_common.sh@642 -- # type -t openssl 00:18:26.388 02:35:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.388 02:35:06 -- common/autotest_common.sh@644 -- # type -P openssl 00:18:26.388 02:35:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:26.388 02:35:06 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:26.388 02:35:06 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:26.388 02:35:06 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:26.388 Error setting digest 00:18:26.388 40826241837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:26.388 40826241837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:26.388 02:35:06 -- common/autotest_common.sh@653 -- # es=1 00:18:26.388 02:35:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:26.388 02:35:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:26.388 02:35:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:26.388 02:35:06 -- fips/fips.sh@130 -- # nvmftestinit 00:18:26.388 02:35:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:26.389 02:35:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.389 02:35:06 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:18:26.389 02:35:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:26.389 02:35:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:26.389 02:35:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.389 02:35:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.389 02:35:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.389 02:35:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:26.389 02:35:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:26.389 02:35:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:26.389 02:35:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:26.389 02:35:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:26.389 02:35:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:26.389 02:35:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.389 02:35:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.389 02:35:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:26.389 02:35:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:26.389 02:35:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:26.389 02:35:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:26.389 02:35:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:26.389 02:35:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.389 02:35:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:26.389 02:35:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:26.389 02:35:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:26.389 02:35:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:26.389 02:35:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:26.389 02:35:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:26.389 Cannot find device "nvmf_tgt_br" 00:18:26.389 02:35:06 -- nvmf/common.sh@154 -- # true 00:18:26.389 02:35:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.389 Cannot find device "nvmf_tgt_br2" 00:18:26.389 02:35:06 -- nvmf/common.sh@155 -- # true 00:18:26.389 02:35:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:26.389 02:35:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:26.389 Cannot find device "nvmf_tgt_br" 00:18:26.389 02:35:06 -- nvmf/common.sh@157 -- # true 00:18:26.389 02:35:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:26.389 Cannot find device "nvmf_tgt_br2" 00:18:26.389 02:35:06 -- nvmf/common.sh@158 -- # true 00:18:26.389 02:35:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:26.389 02:35:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:26.389 02:35:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.389 02:35:06 -- nvmf/common.sh@161 -- # true 00:18:26.389 02:35:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.389 02:35:06 -- nvmf/common.sh@162 -- # true 00:18:26.389 02:35:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:26.389 02:35:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:26.389 02:35:06 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:26.389 02:35:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:26.389 02:35:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:26.389 02:35:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:26.389 02:35:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:26.389 02:35:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:26.389 02:35:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:26.389 02:35:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:26.389 02:35:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:26.389 02:35:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:26.389 02:35:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:26.389 02:35:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:26.389 02:35:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:26.389 02:35:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:26.389 02:35:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:26.389 02:35:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:26.389 02:35:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:26.389 02:35:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:26.389 02:35:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:26.389 02:35:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:26.389 02:35:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:26.389 02:35:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:26.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:26.389 00:18:26.389 --- 10.0.0.2 ping statistics --- 00:18:26.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.389 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:26.389 02:35:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:26.389 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:26.389 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:18:26.389 00:18:26.389 --- 10.0.0.3 ping statistics --- 00:18:26.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.389 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:26.389 02:35:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:26.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:26.389 00:18:26.389 --- 10.0.0.1 ping statistics --- 00:18:26.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.389 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:26.389 02:35:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.389 02:35:06 -- nvmf/common.sh@421 -- # return 0 00:18:26.389 02:35:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:26.389 02:35:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.389 02:35:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:26.389 02:35:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:26.389 02:35:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.389 02:35:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:26.389 02:35:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:26.389 02:35:06 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:26.389 02:35:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:26.389 02:35:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:26.389 02:35:06 -- common/autotest_common.sh@10 -- # set +x 00:18:26.389 02:35:06 -- nvmf/common.sh@469 -- # nvmfpid=79425 00:18:26.389 02:35:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:26.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.389 02:35:06 -- nvmf/common.sh@470 -- # waitforlisten 79425 00:18:26.389 02:35:06 -- common/autotest_common.sh@829 -- # '[' -z 79425 ']' 00:18:26.389 02:35:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.389 02:35:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.389 02:35:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.389 02:35:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.389 02:35:06 -- common/autotest_common.sh@10 -- # set +x 00:18:26.389 [2024-11-21 02:35:06.505657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:26.389 [2024-11-21 02:35:06.505722] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.389 [2024-11-21 02:35:06.634980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.389 [2024-11-21 02:35:06.722582] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:26.389 [2024-11-21 02:35:06.722754] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.389 [2024-11-21 02:35:06.722772] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.389 [2024-11-21 02:35:06.722781] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
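The nvmf_veth_init sequence above gives the SPDK target its own network namespace: the initiator keeps 10.0.0.1 on the host, the target owns 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, and the host-side ends of the three veth pairs are enslaved to the nvmf_br bridge so all addresses share one L2 segment. Condensed into a standalone sketch (interface names, addresses and firewall rules copied from the trace; needs root):

#!/usr/bin/env bash
# Sketch of the veth/bridge topology built by nvmf_veth_init.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair per link: the *_if end carries traffic, the *_br end joins the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side interfaces move into the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Initiator address on the host, two target addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side ends so the three addresses sit on one segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and let the bridge forward between its ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: the initiator must reach both target addresses.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3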
00:18:26.389 [2024-11-21 02:35:06.722812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.957 02:35:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.957 02:35:07 -- common/autotest_common.sh@862 -- # return 0 00:18:26.957 02:35:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:26.957 02:35:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:26.957 02:35:07 -- common/autotest_common.sh@10 -- # set +x 00:18:26.957 02:35:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.957 02:35:07 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:26.957 02:35:07 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:26.957 02:35:07 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:26.957 02:35:07 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:26.957 02:35:07 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:26.957 02:35:07 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:26.957 02:35:07 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:26.957 02:35:07 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.219 [2024-11-21 02:35:07.740369] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.219 [2024-11-21 02:35:07.756346] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.219 [2024-11-21 02:35:07.756551] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.219 malloc0 00:18:27.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.219 02:35:07 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.219 02:35:07 -- fips/fips.sh@147 -- # bdevperf_pid=79481 00:18:27.219 02:35:07 -- fips/fips.sh@148 -- # waitforlisten 79481 /var/tmp/bdevperf.sock 00:18:27.219 02:35:07 -- common/autotest_common.sh@829 -- # '[' -z 79481 ']' 00:18:27.219 02:35:07 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.219 02:35:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.219 02:35:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.219 02:35:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.219 02:35:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.219 02:35:07 -- common/autotest_common.sh@10 -- # set +x 00:18:27.557 [2024-11-21 02:35:07.904537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
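With the namespace up, fips.sh writes a TLS pre-shared key to a 0600 file, has the target listen on 10.0.0.2:4420 with TLS enabled, and drives I/O through bdevperf acting as the NVMe/TCP initiator. The initiator half of that flow, sketched with the paths, NQNs and bdevperf parameters from the trace (the sleep stands in for the harness's waitforlisten polling of the bdevperf RPC socket):

#!/usr/bin/env bash
# Sketch: TLS PSK handling and the initiator-side attach exercised by fips.sh.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
KEY_PATH=$SPDK/test/nvmf/fips/key.txt
BPERF_SOCK=/var/tmp/bdevperf.sock

# Store the pre-shared key with restrictive permissions, as the test does.
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"

# bdevperf in passive mode (-z) on its own RPC socket; core mask and I/O shape
# match the trace (128 outstanding 4 KiB verify I/Os for 10 seconds).
"$SPDK"/build/examples/bdevperf -m 0x4 -z -r "$BPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &
sleep 2   # the real harness polls the socket (waitforlisten) instead of sleeping

# Attach to the TLS-enabled listener, presenting the PSK.
"$SPDK"/scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$KEY_PATH"

# Kick off the timed I/O run; bdevperf reports IOPS/latency when it finishes.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests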
00:18:27.557 [2024-11-21 02:35:07.904625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79481 ] 00:18:27.557 [2024-11-21 02:35:08.044181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.557 [2024-11-21 02:35:08.130724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.492 02:35:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.492 02:35:08 -- common/autotest_common.sh@862 -- # return 0 00:18:28.492 02:35:08 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:28.492 [2024-11-21 02:35:09.133203] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.750 TLSTESTn1 00:18:28.750 02:35:09 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.750 Running I/O for 10 seconds... 00:18:38.726 00:18:38.726 Latency(us) 00:18:38.726 [2024-11-21T02:35:19.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.726 [2024-11-21T02:35:19.373Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:38.726 Verification LBA range: start 0x0 length 0x2000 00:18:38.726 TLSTESTn1 : 10.02 5908.76 23.08 0.00 0.00 21625.50 4587.52 30027.40 00:18:38.726 [2024-11-21T02:35:19.373Z] =================================================================================================================== 00:18:38.726 [2024-11-21T02:35:19.373Z] Total : 5908.76 23.08 0.00 0.00 21625.50 4587.52 30027.40 00:18:38.726 0 00:18:38.984 02:35:19 -- fips/fips.sh@1 -- # cleanup 00:18:38.984 02:35:19 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:38.984 02:35:19 -- common/autotest_common.sh@806 -- # type=--id 00:18:38.984 02:35:19 -- common/autotest_common.sh@807 -- # id=0 00:18:38.984 02:35:19 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:38.984 02:35:19 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:38.984 02:35:19 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:38.984 02:35:19 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:38.984 02:35:19 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:38.984 02:35:19 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:38.984 nvmf_trace.0 00:18:38.984 02:35:19 -- common/autotest_common.sh@821 -- # return 0 00:18:38.984 02:35:19 -- fips/fips.sh@16 -- # killprocess 79481 00:18:38.984 02:35:19 -- common/autotest_common.sh@936 -- # '[' -z 79481 ']' 00:18:38.984 02:35:19 -- common/autotest_common.sh@940 -- # kill -0 79481 00:18:38.984 02:35:19 -- common/autotest_common.sh@941 -- # uname 00:18:38.984 02:35:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:38.984 02:35:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79481 00:18:38.984 02:35:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:38.984 02:35:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:38.984 
02:35:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79481' 00:18:38.984 killing process with pid 79481 00:18:38.984 Received shutdown signal, test time was about 10.000000 seconds 00:18:38.984 00:18:38.984 Latency(us) 00:18:38.984 [2024-11-21T02:35:19.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.984 [2024-11-21T02:35:19.631Z] =================================================================================================================== 00:18:38.984 [2024-11-21T02:35:19.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.984 02:35:19 -- common/autotest_common.sh@955 -- # kill 79481 00:18:38.984 02:35:19 -- common/autotest_common.sh@960 -- # wait 79481 00:18:39.243 02:35:19 -- fips/fips.sh@17 -- # nvmftestfini 00:18:39.243 02:35:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:39.243 02:35:19 -- nvmf/common.sh@116 -- # sync 00:18:39.243 02:35:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:39.243 02:35:19 -- nvmf/common.sh@119 -- # set +e 00:18:39.243 02:35:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:39.243 02:35:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:39.243 rmmod nvme_tcp 00:18:39.243 rmmod nvme_fabrics 00:18:39.502 rmmod nvme_keyring 00:18:39.502 02:35:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:39.502 02:35:19 -- nvmf/common.sh@123 -- # set -e 00:18:39.502 02:35:19 -- nvmf/common.sh@124 -- # return 0 00:18:39.502 02:35:19 -- nvmf/common.sh@477 -- # '[' -n 79425 ']' 00:18:39.502 02:35:19 -- nvmf/common.sh@478 -- # killprocess 79425 00:18:39.502 02:35:19 -- common/autotest_common.sh@936 -- # '[' -z 79425 ']' 00:18:39.502 02:35:19 -- common/autotest_common.sh@940 -- # kill -0 79425 00:18:39.502 02:35:19 -- common/autotest_common.sh@941 -- # uname 00:18:39.502 02:35:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.502 02:35:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79425 00:18:39.502 killing process with pid 79425 00:18:39.502 02:35:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:39.502 02:35:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:39.502 02:35:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79425' 00:18:39.502 02:35:19 -- common/autotest_common.sh@955 -- # kill 79425 00:18:39.502 02:35:19 -- common/autotest_common.sh@960 -- # wait 79425 00:18:39.761 02:35:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:39.761 02:35:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:39.761 02:35:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:39.761 02:35:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.761 02:35:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:39.761 02:35:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.761 02:35:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.761 02:35:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.761 02:35:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:39.761 02:35:20 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:39.761 00:18:39.761 real 0m14.525s 00:18:39.761 user 0m18.722s 00:18:39.761 sys 0m6.499s 00:18:39.761 02:35:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:39.761 02:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:39.761 ************************************ 00:18:39.761 END TEST nvmf_fips 
00:18:39.761 ************************************ 00:18:39.761 02:35:20 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:39.761 02:35:20 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:39.761 02:35:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:39.761 02:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.761 02:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:39.761 ************************************ 00:18:39.761 START TEST nvmf_fuzz 00:18:39.761 ************************************ 00:18:39.762 02:35:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:39.762 * Looking for test storage... 00:18:39.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:39.762 02:35:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:39.762 02:35:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:39.762 02:35:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:40.022 02:35:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:40.022 02:35:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:40.022 02:35:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:40.022 02:35:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:40.022 02:35:20 -- scripts/common.sh@335 -- # IFS=.-: 00:18:40.022 02:35:20 -- scripts/common.sh@335 -- # read -ra ver1 00:18:40.022 02:35:20 -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.022 02:35:20 -- scripts/common.sh@336 -- # read -ra ver2 00:18:40.022 02:35:20 -- scripts/common.sh@337 -- # local 'op=<' 00:18:40.022 02:35:20 -- scripts/common.sh@339 -- # ver1_l=2 00:18:40.022 02:35:20 -- scripts/common.sh@340 -- # ver2_l=1 00:18:40.022 02:35:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:40.022 02:35:20 -- scripts/common.sh@343 -- # case "$op" in 00:18:40.022 02:35:20 -- scripts/common.sh@344 -- # : 1 00:18:40.022 02:35:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:40.022 02:35:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.022 02:35:20 -- scripts/common.sh@364 -- # decimal 1 00:18:40.022 02:35:20 -- scripts/common.sh@352 -- # local d=1 00:18:40.022 02:35:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.022 02:35:20 -- scripts/common.sh@354 -- # echo 1 00:18:40.022 02:35:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:40.022 02:35:20 -- scripts/common.sh@365 -- # decimal 2 00:18:40.022 02:35:20 -- scripts/common.sh@352 -- # local d=2 00:18:40.022 02:35:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.022 02:35:20 -- scripts/common.sh@354 -- # echo 2 00:18:40.022 02:35:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:40.022 02:35:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:40.022 02:35:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:40.022 02:35:20 -- scripts/common.sh@367 -- # return 0 00:18:40.022 02:35:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.022 02:35:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:40.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.022 --rc genhtml_branch_coverage=1 00:18:40.022 --rc genhtml_function_coverage=1 00:18:40.022 --rc genhtml_legend=1 00:18:40.022 --rc geninfo_all_blocks=1 00:18:40.022 --rc geninfo_unexecuted_blocks=1 00:18:40.022 00:18:40.022 ' 00:18:40.022 02:35:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:40.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.022 --rc genhtml_branch_coverage=1 00:18:40.022 --rc genhtml_function_coverage=1 00:18:40.022 --rc genhtml_legend=1 00:18:40.022 --rc geninfo_all_blocks=1 00:18:40.022 --rc geninfo_unexecuted_blocks=1 00:18:40.022 00:18:40.022 ' 00:18:40.022 02:35:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:40.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.022 --rc genhtml_branch_coverage=1 00:18:40.022 --rc genhtml_function_coverage=1 00:18:40.022 --rc genhtml_legend=1 00:18:40.022 --rc geninfo_all_blocks=1 00:18:40.022 --rc geninfo_unexecuted_blocks=1 00:18:40.022 00:18:40.022 ' 00:18:40.022 02:35:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:40.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.022 --rc genhtml_branch_coverage=1 00:18:40.022 --rc genhtml_function_coverage=1 00:18:40.022 --rc genhtml_legend=1 00:18:40.022 --rc geninfo_all_blocks=1 00:18:40.022 --rc geninfo_unexecuted_blocks=1 00:18:40.022 00:18:40.022 ' 00:18:40.022 02:35:20 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.022 02:35:20 -- nvmf/common.sh@7 -- # uname -s 00:18:40.022 02:35:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.022 02:35:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.022 02:35:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.022 02:35:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.022 02:35:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.022 02:35:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.022 02:35:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.022 02:35:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.022 02:35:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.022 02:35:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.022 02:35:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
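The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates version 2 so the matching coverage options can be chosen: each version string is split on '.', '-' and ':' and the fields are compared numerically, left to right. A simplified sketch of that comparison (not the full scripts/common.sh implementation; non-numeric fields are just treated as zero here):

#!/usr/bin/env bash
# Sketch of the field-by-field version comparison traced above.

lt() {   # usage: lt VER1 VER2  -> exit status 0 when VER1 < VER2
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i a b max
    max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        a=${v1[i]:-0}
        b=${v2[i]:-0}
        # Non-numeric fields (e.g. "pre" suffixes) count as 0 in this sketch.
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # versions are equal
}

lt 1.15 2 && echo "lcov is older than 2.0: use the pre-2.0 option names"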
00:18:40.022 02:35:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:18:40.022 02:35:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.022 02:35:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.022 02:35:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.022 02:35:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.022 02:35:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.022 02:35:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.022 02:35:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.022 02:35:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.022 02:35:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.023 02:35:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.023 02:35:20 -- paths/export.sh@5 -- # export PATH 00:18:40.023 02:35:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.023 02:35:20 -- nvmf/common.sh@46 -- # : 0 00:18:40.023 02:35:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.023 02:35:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.023 02:35:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.023 02:35:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.023 02:35:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.023 02:35:20 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:40.023 02:35:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.023 02:35:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.023 02:35:20 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:40.023 02:35:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:40.023 02:35:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.023 02:35:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:40.023 02:35:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:40.023 02:35:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:40.023 02:35:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.023 02:35:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.023 02:35:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.023 02:35:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:40.023 02:35:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:40.023 02:35:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:40.023 02:35:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:40.023 02:35:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:40.023 02:35:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:40.023 02:35:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.023 02:35:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.023 02:35:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:40.023 02:35:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:40.023 02:35:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.023 02:35:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.023 02:35:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.023 02:35:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.023 02:35:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.023 02:35:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.023 02:35:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.023 02:35:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.023 02:35:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:40.023 02:35:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:40.023 Cannot find device "nvmf_tgt_br" 00:18:40.023 02:35:20 -- nvmf/common.sh@154 -- # true 00:18:40.023 02:35:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.023 Cannot find device "nvmf_tgt_br2" 00:18:40.023 02:35:20 -- nvmf/common.sh@155 -- # true 00:18:40.023 02:35:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:40.023 02:35:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:40.023 Cannot find device "nvmf_tgt_br" 00:18:40.023 02:35:20 -- nvmf/common.sh@157 -- # true 00:18:40.023 02:35:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:40.023 Cannot find device "nvmf_tgt_br2" 00:18:40.023 02:35:20 -- nvmf/common.sh@158 -- # true 00:18:40.023 02:35:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:40.023 02:35:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:40.283 02:35:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.283 02:35:20 -- nvmf/common.sh@161 -- # true 00:18:40.283 02:35:20 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.283 02:35:20 -- nvmf/common.sh@162 -- # true 00:18:40.283 02:35:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.283 02:35:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.283 02:35:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.283 02:35:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.283 02:35:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.283 02:35:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.283 02:35:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:40.283 02:35:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:40.283 02:35:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:40.283 02:35:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:40.283 02:35:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:40.283 02:35:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:40.283 02:35:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:40.283 02:35:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.283 02:35:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:40.283 02:35:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.283 02:35:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:40.283 02:35:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:40.283 02:35:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:40.283 02:35:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:40.283 02:35:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:40.283 02:35:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:40.283 02:35:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:40.283 02:35:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:40.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:40.283 00:18:40.283 --- 10.0.0.2 ping statistics --- 00:18:40.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.283 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:40.283 02:35:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:40.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:40.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:40.283 00:18:40.283 --- 10.0.0.3 ping statistics --- 00:18:40.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.283 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:40.283 02:35:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:40.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:40.283 00:18:40.283 --- 10.0.0.1 ping statistics --- 00:18:40.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.283 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:40.283 02:35:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.283 02:35:20 -- nvmf/common.sh@421 -- # return 0 00:18:40.283 02:35:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:40.283 02:35:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.283 02:35:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:40.283 02:35:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:40.283 02:35:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.283 02:35:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:40.283 02:35:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:40.283 02:35:20 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=79832 00:18:40.283 02:35:20 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:40.283 02:35:20 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:40.283 02:35:20 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 79832 00:18:40.283 02:35:20 -- common/autotest_common.sh@829 -- # '[' -z 79832 ']' 00:18:40.283 02:35:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.283 02:35:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.283 02:35:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
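The rpc_cmd calls that follow build the fuzz target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks exported under nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420, after which nvme_fuzz spends 30 seconds throwing seeded random commands at the subsystem. Collected into one sketch (paths, NQNs and fuzz flags are copied from the trace; the rpc_get_methods poll stands in for the harness's waitforlisten):

#!/usr/bin/env bash
# Sketch of the fuzz-target bring-up performed by the rpc_cmd calls below.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py

# Target on core 0 inside the test namespace; wait until its RPC socket answers.
ip netns exec nvmf_tgt_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# TCP transport, malloc-backed namespace, and a listener the fuzzer can reach.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create -b Malloc0 64 512
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 30 s of seeded random commands over NVMe/TCP; the -N and -a flags are taken
# verbatim from the trace.
"$SPDK"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
    -N -a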
00:18:40.283 02:35:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.283 02:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:41.662 02:35:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.662 02:35:21 -- common/autotest_common.sh@862 -- # return 0 00:18:41.662 02:35:21 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:41.662 02:35:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.662 02:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:41.662 02:35:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.662 02:35:21 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:41.662 02:35:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.662 02:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:41.662 Malloc0 00:18:41.662 02:35:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.662 02:35:21 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:41.662 02:35:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.662 02:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:41.662 02:35:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.662 02:35:22 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:41.662 02:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.662 02:35:22 -- common/autotest_common.sh@10 -- # set +x 00:18:41.662 02:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.662 02:35:22 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.662 02:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.662 02:35:22 -- common/autotest_common.sh@10 -- # set +x 00:18:41.662 02:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.662 02:35:22 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:41.662 02:35:22 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:41.922 Shutting down the fuzz application 00:18:41.922 02:35:22 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:42.181 Shutting down the fuzz application 00:18:42.181 02:35:22 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.181 02:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.181 02:35:22 -- common/autotest_common.sh@10 -- # set +x 00:18:42.181 02:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.181 02:35:22 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:42.181 02:35:22 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:42.181 02:35:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:42.181 02:35:22 -- nvmf/common.sh@116 -- # sync 00:18:42.181 02:35:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:42.181 02:35:22 -- nvmf/common.sh@119 -- # set +e 00:18:42.181 02:35:22 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:42.181 02:35:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:42.181 rmmod nvme_tcp 00:18:42.181 rmmod nvme_fabrics 00:18:42.440 rmmod nvme_keyring 00:18:42.440 02:35:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:42.440 02:35:22 -- nvmf/common.sh@123 -- # set -e 00:18:42.440 02:35:22 -- nvmf/common.sh@124 -- # return 0 00:18:42.440 02:35:22 -- nvmf/common.sh@477 -- # '[' -n 79832 ']' 00:18:42.440 02:35:22 -- nvmf/common.sh@478 -- # killprocess 79832 00:18:42.440 02:35:22 -- common/autotest_common.sh@936 -- # '[' -z 79832 ']' 00:18:42.440 02:35:22 -- common/autotest_common.sh@940 -- # kill -0 79832 00:18:42.440 02:35:22 -- common/autotest_common.sh@941 -- # uname 00:18:42.440 02:35:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:42.440 02:35:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79832 00:18:42.440 02:35:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:42.440 02:35:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:42.440 killing process with pid 79832 00:18:42.440 02:35:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79832' 00:18:42.440 02:35:22 -- common/autotest_common.sh@955 -- # kill 79832 00:18:42.440 02:35:22 -- common/autotest_common.sh@960 -- # wait 79832 00:18:42.699 02:35:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:42.699 02:35:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:42.699 02:35:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:42.699 02:35:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.699 02:35:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:42.699 02:35:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.699 02:35:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.699 02:35:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.699 02:35:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:42.699 02:35:23 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:42.699 00:18:42.699 real 0m2.997s 00:18:42.699 user 0m3.115s 00:18:42.699 sys 0m0.727s 00:18:42.699 02:35:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:42.699 02:35:23 -- common/autotest_common.sh@10 -- # set +x 00:18:42.699 ************************************ 00:18:42.699 END TEST nvmf_fuzz 00:18:42.699 ************************************ 00:18:42.699 02:35:23 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:42.699 02:35:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:42.699 02:35:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.699 02:35:23 -- common/autotest_common.sh@10 -- # set +x 00:18:42.699 ************************************ 00:18:42.699 START TEST nvmf_multiconnection 00:18:42.699 ************************************ 00:18:42.699 02:35:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:42.959 * Looking for test storage... 
00:18:42.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:42.959 02:35:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:42.959 02:35:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:42.959 02:35:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:42.959 02:35:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:42.959 02:35:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:42.959 02:35:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:42.959 02:35:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:42.959 02:35:23 -- scripts/common.sh@335 -- # IFS=.-: 00:18:42.959 02:35:23 -- scripts/common.sh@335 -- # read -ra ver1 00:18:42.959 02:35:23 -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.959 02:35:23 -- scripts/common.sh@336 -- # read -ra ver2 00:18:42.959 02:35:23 -- scripts/common.sh@337 -- # local 'op=<' 00:18:42.959 02:35:23 -- scripts/common.sh@339 -- # ver1_l=2 00:18:42.959 02:35:23 -- scripts/common.sh@340 -- # ver2_l=1 00:18:42.959 02:35:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:42.959 02:35:23 -- scripts/common.sh@343 -- # case "$op" in 00:18:42.959 02:35:23 -- scripts/common.sh@344 -- # : 1 00:18:42.959 02:35:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:42.959 02:35:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:42.959 02:35:23 -- scripts/common.sh@364 -- # decimal 1 00:18:42.959 02:35:23 -- scripts/common.sh@352 -- # local d=1 00:18:42.959 02:35:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.959 02:35:23 -- scripts/common.sh@354 -- # echo 1 00:18:42.959 02:35:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:42.959 02:35:23 -- scripts/common.sh@365 -- # decimal 2 00:18:42.959 02:35:23 -- scripts/common.sh@352 -- # local d=2 00:18:42.959 02:35:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.959 02:35:23 -- scripts/common.sh@354 -- # echo 2 00:18:42.959 02:35:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:42.959 02:35:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:42.959 02:35:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:42.959 02:35:23 -- scripts/common.sh@367 -- # return 0 00:18:42.959 02:35:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.959 02:35:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:42.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.959 --rc genhtml_branch_coverage=1 00:18:42.959 --rc genhtml_function_coverage=1 00:18:42.959 --rc genhtml_legend=1 00:18:42.959 --rc geninfo_all_blocks=1 00:18:42.959 --rc geninfo_unexecuted_blocks=1 00:18:42.959 00:18:42.959 ' 00:18:42.959 02:35:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:42.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.959 --rc genhtml_branch_coverage=1 00:18:42.959 --rc genhtml_function_coverage=1 00:18:42.959 --rc genhtml_legend=1 00:18:42.959 --rc geninfo_all_blocks=1 00:18:42.959 --rc geninfo_unexecuted_blocks=1 00:18:42.959 00:18:42.959 ' 00:18:42.959 02:35:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:42.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.959 --rc genhtml_branch_coverage=1 00:18:42.959 --rc genhtml_function_coverage=1 00:18:42.959 --rc genhtml_legend=1 00:18:42.959 --rc geninfo_all_blocks=1 00:18:42.959 --rc geninfo_unexecuted_blocks=1 00:18:42.959 00:18:42.959 ' 00:18:42.959 
02:35:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:42.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.959 --rc genhtml_branch_coverage=1 00:18:42.959 --rc genhtml_function_coverage=1 00:18:42.959 --rc genhtml_legend=1 00:18:42.959 --rc geninfo_all_blocks=1 00:18:42.959 --rc geninfo_unexecuted_blocks=1 00:18:42.959 00:18:42.959 ' 00:18:42.959 02:35:23 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.959 02:35:23 -- nvmf/common.sh@7 -- # uname -s 00:18:42.959 02:35:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.959 02:35:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.959 02:35:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.959 02:35:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.959 02:35:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.959 02:35:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.959 02:35:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.959 02:35:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.959 02:35:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.959 02:35:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.959 02:35:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:18:42.959 02:35:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:18:42.959 02:35:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.960 02:35:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.960 02:35:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.960 02:35:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.960 02:35:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.960 02:35:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.960 02:35:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.960 02:35:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.960 02:35:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.960 02:35:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.960 02:35:23 -- paths/export.sh@5 -- # export PATH 00:18:42.960 02:35:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.960 02:35:23 -- nvmf/common.sh@46 -- # : 0 00:18:42.960 02:35:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:42.960 02:35:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:42.960 02:35:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:42.960 02:35:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.960 02:35:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.960 02:35:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:42.960 02:35:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:42.960 02:35:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:42.960 02:35:23 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.960 02:35:23 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.960 02:35:23 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:42.960 02:35:23 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:42.960 02:35:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:42.960 02:35:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.960 02:35:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:42.960 02:35:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:42.960 02:35:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:42.960 02:35:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.960 02:35:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.960 02:35:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.960 02:35:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:42.960 02:35:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:42.960 02:35:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:42.960 02:35:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:42.960 02:35:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:42.960 02:35:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:42.960 02:35:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.960 02:35:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.960 02:35:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:42.960 02:35:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:42.960 02:35:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.960 02:35:23 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.960 02:35:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.960 02:35:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.960 02:35:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.960 02:35:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.960 02:35:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.960 02:35:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.960 02:35:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:42.960 02:35:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:42.960 Cannot find device "nvmf_tgt_br" 00:18:42.960 02:35:23 -- nvmf/common.sh@154 -- # true 00:18:42.960 02:35:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.960 Cannot find device "nvmf_tgt_br2" 00:18:42.960 02:35:23 -- nvmf/common.sh@155 -- # true 00:18:42.960 02:35:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:42.960 02:35:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:43.219 Cannot find device "nvmf_tgt_br" 00:18:43.219 02:35:23 -- nvmf/common.sh@157 -- # true 00:18:43.219 02:35:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:43.219 Cannot find device "nvmf_tgt_br2" 00:18:43.219 02:35:23 -- nvmf/common.sh@158 -- # true 00:18:43.219 02:35:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:43.219 02:35:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:43.219 02:35:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.219 02:35:23 -- nvmf/common.sh@161 -- # true 00:18:43.219 02:35:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.219 02:35:23 -- nvmf/common.sh@162 -- # true 00:18:43.219 02:35:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:43.219 02:35:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:43.219 02:35:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:43.219 02:35:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:43.219 02:35:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:43.219 02:35:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:43.219 02:35:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:43.219 02:35:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:43.219 02:35:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:43.219 02:35:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:43.219 02:35:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:43.219 02:35:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:43.219 02:35:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:43.219 02:35:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.219 02:35:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:43.219 02:35:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.219 02:35:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:43.219 02:35:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:43.219 02:35:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.219 02:35:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.219 02:35:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.219 02:35:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.219 02:35:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.219 02:35:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:43.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:18:43.219 00:18:43.219 --- 10.0.0.2 ping statistics --- 00:18:43.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.219 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:43.478 02:35:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:43.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:43.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:43.478 00:18:43.478 --- 10.0.0.3 ping statistics --- 00:18:43.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.478 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:43.478 02:35:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:43.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:18:43.478 00:18:43.478 --- 10.0.0.1 ping statistics --- 00:18:43.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.478 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:43.478 02:35:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.478 02:35:23 -- nvmf/common.sh@421 -- # return 0 00:18:43.478 02:35:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:43.478 02:35:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.478 02:35:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:43.478 02:35:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:43.479 02:35:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.479 02:35:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:43.479 02:35:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:43.479 02:35:23 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:43.479 02:35:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:43.479 02:35:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:43.479 02:35:23 -- common/autotest_common.sh@10 -- # set +x 00:18:43.479 02:35:23 -- nvmf/common.sh@469 -- # nvmfpid=80050 00:18:43.479 02:35:23 -- nvmf/common.sh@470 -- # waitforlisten 80050 00:18:43.479 02:35:23 -- common/autotest_common.sh@829 -- # '[' -z 80050 ']' 00:18:43.479 02:35:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:43.479 02:35:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.479 02:35:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.479 Waiting for process to start up and listen on UNIX domain socket 
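For reference, the veth/namespace/bridge topology that nvmf_veth_init assembles in the commands logged above can be reproduced standalone roughly as follows. The interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.0/24 addresses are the test defaults visible in this log; this is a condensed illustration, not the verbatim contents of nvmf/common.sh.

# Sketch (run as root): one initiator veth pair on the host, two target veth
# pairs whose "*_if" ends live inside the nvmf_tgt_ns_spdk namespace, all
# joined through the nvmf_br bridge, with TCP/4420 allowed in.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                # initiator -> target, as in the log
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator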
/var/tmp/spdk.sock... 00:18:43.479 02:35:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.479 02:35:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.479 02:35:23 -- common/autotest_common.sh@10 -- # set +x 00:18:43.479 [2024-11-21 02:35:23.963397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:43.479 [2024-11-21 02:35:23.963493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.479 [2024-11-21 02:35:24.103082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:43.737 [2024-11-21 02:35:24.191082] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:43.737 [2024-11-21 02:35:24.191237] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.737 [2024-11-21 02:35:24.191252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.737 [2024-11-21 02:35:24.191263] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.737 [2024-11-21 02:35:24.191415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.737 [2024-11-21 02:35:24.191556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.737 [2024-11-21 02:35:24.192234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.737 [2024-11-21 02:35:24.192297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.676 02:35:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.676 02:35:24 -- common/autotest_common.sh@862 -- # return 0 00:18:44.676 02:35:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:44.676 02:35:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:44.676 02:35:24 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.676 02:35:24 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:44.676 02:35:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:24 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 [2024-11-21 02:35:25.010641] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:44.676 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.676 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 Malloc1 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 
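The target bring-up recorded above amounts to running nvmf_tgt inside the namespace, waiting for its RPC socket, and creating the TCP transport. A minimal sketch is below; rpc.py stands in for the rpc_cmd helper, and the polling loop is only a crude approximation of waitforlisten(). Paths and flags are the ones printed in the log.

SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Wait until the target answers on its default RPC socket (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done
# Transport creation with the same flags the test passes above.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192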
02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 [2024-11-21 02:35:25.093671] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.676 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 Malloc2 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.676 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 Malloc3 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.676 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 Malloc4 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.676 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 Malloc5 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:44.676 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.676 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.676 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:44.676 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.676 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 Malloc6 00:18:45.016 02:35:25 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.016 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 Malloc7 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.016 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 Malloc8 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 
-- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.016 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 Malloc9 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.016 02:35:25 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 Malloc10 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:45.016 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.016 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.016 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.016 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:45.017 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.017 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.017 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.017 02:35:25 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.017 02:35:25 -- 
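The per-subsystem provisioning pattern repeated above can be written out as the equivalent rpc.py calls. MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and NVMF_SUBSYS=11 are the values set earlier in this run; rpc.py is assumed here as the stand-in for the rpc_cmd wrapper.

SPDK=/home/vagrant/spdk_repo/spdk
for i in $(seq 1 11); do
    # Back each subsystem with a 64 MiB / 512 B-block malloc bdev.
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
    # Create the subsystem, attach the namespace, and expose a TCP listener on 10.0.0.2:4420.
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done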
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:45.017 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.017 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.017 Malloc11 00:18:45.017 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.017 02:35:25 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:45.017 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.017 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.017 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.017 02:35:25 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:45.017 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.017 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.017 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.017 02:35:25 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:45.017 02:35:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.017 02:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:45.017 02:35:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.017 02:35:25 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:45.017 02:35:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.017 02:35:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:45.279 02:35:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:45.279 02:35:25 -- common/autotest_common.sh@1187 -- # local i=0 00:18:45.279 02:35:25 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:45.279 02:35:25 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:45.279 02:35:25 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:47.814 02:35:27 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:47.814 02:35:27 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:47.814 02:35:27 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:47.814 02:35:27 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:47.814 02:35:27 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.814 02:35:27 -- common/autotest_common.sh@1197 -- # return 0 00:18:47.814 02:35:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.814 02:35:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:47.814 02:35:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:47.814 02:35:28 -- common/autotest_common.sh@1187 -- # local i=0 00:18:47.814 02:35:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.814 02:35:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:47.814 02:35:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:49.736 02:35:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:49.736 02:35:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
NAME,SERIAL 00:18:49.736 02:35:30 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:49.736 02:35:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:49.736 02:35:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.736 02:35:30 -- common/autotest_common.sh@1197 -- # return 0 00:18:49.736 02:35:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.736 02:35:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:49.736 02:35:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:49.736 02:35:30 -- common/autotest_common.sh@1187 -- # local i=0 00:18:49.736 02:35:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.736 02:35:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:49.736 02:35:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:51.643 02:35:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:51.643 02:35:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:51.643 02:35:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:51.643 02:35:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:51.643 02:35:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.643 02:35:32 -- common/autotest_common.sh@1197 -- # return 0 00:18:51.643 02:35:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:51.643 02:35:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:51.901 02:35:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:51.901 02:35:32 -- common/autotest_common.sh@1187 -- # local i=0 00:18:51.902 02:35:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.902 02:35:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:51.902 02:35:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:53.805 02:35:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:53.805 02:35:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:53.805 02:35:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:53.805 02:35:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:53.805 02:35:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.805 02:35:34 -- common/autotest_common.sh@1197 -- # return 0 00:18:53.805 02:35:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.805 02:35:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:54.065 02:35:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:54.065 02:35:34 -- common/autotest_common.sh@1187 -- # local i=0 00:18:54.065 02:35:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.065 02:35:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:54.065 02:35:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:56.598 02:35:36 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:56.598 02:35:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:56.598 02:35:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:56.598 02:35:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:56.598 02:35:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.598 02:35:36 -- common/autotest_common.sh@1197 -- # return 0 00:18:56.598 02:35:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:56.598 02:35:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:56.598 02:35:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:56.598 02:35:36 -- common/autotest_common.sh@1187 -- # local i=0 00:18:56.598 02:35:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.598 02:35:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:56.598 02:35:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:58.502 02:35:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:58.502 02:35:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:58.502 02:35:38 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:58.502 02:35:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:58.502 02:35:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.502 02:35:38 -- common/autotest_common.sh@1197 -- # return 0 00:18:58.502 02:35:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.502 02:35:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:58.502 02:35:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:58.502 02:35:39 -- common/autotest_common.sh@1187 -- # local i=0 00:18:58.502 02:35:39 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.502 02:35:39 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:58.502 02:35:39 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:00.405 02:35:41 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:00.405 02:35:41 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:00.405 02:35:41 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:19:00.665 02:35:41 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:00.665 02:35:41 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.665 02:35:41 -- common/autotest_common.sh@1197 -- # return 0 00:19:00.665 02:35:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.665 02:35:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:00.665 02:35:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:00.665 02:35:41 -- common/autotest_common.sh@1187 -- # local i=0 00:19:00.665 02:35:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.665 02:35:41 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:00.665 02:35:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:03.197 02:35:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:03.197 02:35:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:03.197 02:35:43 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:19:03.197 02:35:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:03.197 02:35:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.197 02:35:43 -- common/autotest_common.sh@1197 -- # return 0 00:19:03.197 02:35:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.197 02:35:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:03.197 02:35:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:03.198 02:35:43 -- common/autotest_common.sh@1187 -- # local i=0 00:19:03.198 02:35:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.198 02:35:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:03.198 02:35:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:05.104 02:35:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:05.104 02:35:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:05.104 02:35:45 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:19:05.104 02:35:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:05.105 02:35:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.105 02:35:45 -- common/autotest_common.sh@1197 -- # return 0 00:19:05.105 02:35:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.105 02:35:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:05.105 02:35:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:05.105 02:35:45 -- common/autotest_common.sh@1187 -- # local i=0 00:19:05.105 02:35:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.105 02:35:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:05.105 02:35:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:07.643 02:35:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:07.643 02:35:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:07.643 02:35:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:19:07.643 02:35:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:07.643 02:35:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.643 02:35:47 -- common/autotest_common.sh@1197 -- # return 0 00:19:07.643 02:35:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.643 02:35:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:07.643 02:35:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:07.643 02:35:47 -- common/autotest_common.sh@1187 -- # local i=0 
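The host-side attach loop interleaved above follows one pattern per subsystem: nvme connect, then poll until a block device with the expected serial appears. A sketch mirroring it is below; the hostnqn/hostid UUID is the one printed in the log, and the lsblk polling mirrors what waitforserial() does with nvme_device_counter=1.

HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b
for i in $(seq 1 11); do
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    # Wait until the namespace shows up with serial SPDK$i (waitforserial sleeps 2s per retry).
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        sleep 2
    done
done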
00:19:07.643 02:35:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.643 02:35:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:07.643 02:35:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:09.546 02:35:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:09.546 02:35:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:09.546 02:35:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:19:09.546 02:35:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:09.546 02:35:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.546 02:35:49 -- common/autotest_common.sh@1197 -- # return 0 00:19:09.546 02:35:49 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:09.546 [global] 00:19:09.546 thread=1 00:19:09.546 invalidate=1 00:19:09.546 rw=read 00:19:09.546 time_based=1 00:19:09.546 runtime=10 00:19:09.546 ioengine=libaio 00:19:09.546 direct=1 00:19:09.546 bs=262144 00:19:09.546 iodepth=64 00:19:09.546 norandommap=1 00:19:09.546 numjobs=1 00:19:09.546 00:19:09.546 [job0] 00:19:09.546 filename=/dev/nvme0n1 00:19:09.546 [job1] 00:19:09.546 filename=/dev/nvme10n1 00:19:09.546 [job2] 00:19:09.546 filename=/dev/nvme1n1 00:19:09.546 [job3] 00:19:09.546 filename=/dev/nvme2n1 00:19:09.546 [job4] 00:19:09.546 filename=/dev/nvme3n1 00:19:09.546 [job5] 00:19:09.546 filename=/dev/nvme4n1 00:19:09.546 [job6] 00:19:09.546 filename=/dev/nvme5n1 00:19:09.546 [job7] 00:19:09.546 filename=/dev/nvme6n1 00:19:09.546 [job8] 00:19:09.546 filename=/dev/nvme7n1 00:19:09.546 [job9] 00:19:09.546 filename=/dev/nvme8n1 00:19:09.546 [job10] 00:19:09.546 filename=/dev/nvme9n1 00:19:09.546 Could not set queue depth (nvme0n1) 00:19:09.546 Could not set queue depth (nvme10n1) 00:19:09.546 Could not set queue depth (nvme1n1) 00:19:09.546 Could not set queue depth (nvme2n1) 00:19:09.546 Could not set queue depth (nvme3n1) 00:19:09.546 Could not set queue depth (nvme4n1) 00:19:09.546 Could not set queue depth (nvme5n1) 00:19:09.546 Could not set queue depth (nvme6n1) 00:19:09.546 Could not set queue depth (nvme7n1) 00:19:09.546 Could not set queue depth (nvme8n1) 00:19:09.546 Could not set queue depth (nvme9n1) 00:19:09.546 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:19:09.546 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:09.546 fio-3.35 00:19:09.546 Starting 11 threads 00:19:21.758 00:19:21.758 job0: (groupid=0, jobs=1): err= 0: pid=80534: Thu Nov 21 02:36:00 2024 00:19:21.758 read: IOPS=565, BW=141MiB/s (148MB/s)(1422MiB/10057msec) 00:19:21.759 slat (usec): min=20, max=109861, avg=1683.37, stdev=6542.98 00:19:21.759 clat (msec): min=2, max=296, avg=111.32, stdev=31.11 00:19:21.759 lat (msec): min=2, max=296, avg=113.00, stdev=32.04 00:19:21.759 clat percentiles (msec): 00:19:21.759 | 1.00th=[ 20], 5.00th=[ 42], 10.00th=[ 73], 20.00th=[ 92], 00:19:21.759 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 116], 60.00th=[ 123], 00:19:21.759 | 70.00th=[ 128], 80.00th=[ 134], 90.00th=[ 142], 95.00th=[ 153], 00:19:21.759 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 209], 99.95th=[ 209], 00:19:21.759 | 99.99th=[ 296] 00:19:21.759 bw ( KiB/s): min=97280, max=277504, per=8.87%, avg=143977.90, stdev=38844.47, samples=20 00:19:21.759 iops : min= 380, max= 1084, avg=562.35, stdev=151.70, samples=20 00:19:21.759 lat (msec) : 4=0.04%, 10=0.32%, 20=1.20%, 50=4.11%, 100=20.09% 00:19:21.759 lat (msec) : 250=74.23%, 500=0.02% 00:19:21.759 cpu : usr=0.25%, sys=1.84%, ctx=1112, majf=0, minf=4097 00:19:21.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:21.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.759 issued rwts: total=5689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.759 job1: (groupid=0, jobs=1): err= 0: pid=80535: Thu Nov 21 02:36:00 2024 00:19:21.759 read: IOPS=693, BW=173MiB/s (182MB/s)(1743MiB/10058msec) 00:19:21.759 slat (usec): min=16, max=65777, avg=1353.02, stdev=5061.01 00:19:21.759 clat (usec): min=1531, max=190732, avg=90829.82, stdev=34136.28 00:19:21.759 lat (usec): min=1619, max=198103, avg=92182.84, stdev=34902.07 00:19:21.759 clat percentiles (msec): 00:19:21.759 | 1.00th=[ 12], 5.00th=[ 34], 10.00th=[ 43], 20.00th=[ 67], 00:19:21.759 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 99], 00:19:21.759 | 70.00th=[ 111], 80.00th=[ 126], 90.00th=[ 138], 95.00th=[ 144], 00:19:21.759 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 188], 99.95th=[ 190], 00:19:21.759 | 99.99th=[ 190] 00:19:21.759 bw ( KiB/s): min=110882, max=420864, per=10.90%, avg=176821.65, stdev=69190.77, samples=20 00:19:21.759 iops : min= 433, max= 1644, avg=690.70, stdev=270.29, samples=20 00:19:21.759 lat (msec) : 2=0.03%, 4=0.24%, 10=0.57%, 20=1.16%, 50=9.70% 00:19:21.759 lat (msec) : 100=49.35%, 250=38.95% 00:19:21.759 cpu : usr=0.25%, sys=2.06%, ctx=1509, majf=0, minf=4097 00:19:21.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:21.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.759 issued rwts: total=6971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.759 job2: (groupid=0, jobs=1): err= 0: pid=80536: Thu Nov 21 02:36:00 2024 00:19:21.759 read: IOPS=512, BW=128MiB/s (134MB/s)(1296MiB/10106msec) 00:19:21.759 slat (usec): min=17, max=106252, avg=1906.82, stdev=6705.81 00:19:21.759 clat (msec): min=12, max=254, avg=122.68, stdev=30.97 00:19:21.759 lat 
(msec): min=14, max=259, avg=124.59, stdev=31.87 00:19:21.759 clat percentiles (msec): 00:19:21.759 | 1.00th=[ 23], 5.00th=[ 84], 10.00th=[ 90], 20.00th=[ 99], 00:19:21.759 | 30.00th=[ 105], 40.00th=[ 115], 50.00th=[ 127], 60.00th=[ 136], 00:19:21.759 | 70.00th=[ 142], 80.00th=[ 148], 90.00th=[ 155], 95.00th=[ 163], 00:19:21.759 | 99.00th=[ 188], 99.50th=[ 230], 99.90th=[ 247], 99.95th=[ 249], 00:19:21.759 | 99.99th=[ 255] 00:19:21.759 bw ( KiB/s): min=75414, max=178176, per=8.07%, avg=131036.20, stdev=28423.94, samples=20 00:19:21.759 iops : min= 294, max= 696, avg=511.75, stdev=111.08, samples=20 00:19:21.759 lat (msec) : 20=0.42%, 50=2.55%, 100=21.96%, 250=75.03%, 500=0.04% 00:19:21.759 cpu : usr=0.14%, sys=1.76%, ctx=984, majf=0, minf=4097 00:19:21.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:21.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.759 issued rwts: total=5182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.759 job3: (groupid=0, jobs=1): err= 0: pid=80537: Thu Nov 21 02:36:00 2024 00:19:21.759 read: IOPS=477, BW=119MiB/s (125MB/s)(1206MiB/10107msec) 00:19:21.759 slat (usec): min=21, max=77068, avg=2021.62, stdev=6869.97 00:19:21.759 clat (msec): min=26, max=267, avg=131.90, stdev=24.58 00:19:21.759 lat (msec): min=27, max=267, avg=133.92, stdev=25.59 00:19:21.759 clat percentiles (msec): 00:19:21.759 | 1.00th=[ 71], 5.00th=[ 86], 10.00th=[ 99], 20.00th=[ 114], 00:19:21.759 | 30.00th=[ 125], 40.00th=[ 131], 50.00th=[ 136], 60.00th=[ 140], 00:19:21.759 | 70.00th=[ 144], 80.00th=[ 150], 90.00th=[ 159], 95.00th=[ 167], 00:19:21.759 | 99.00th=[ 194], 99.50th=[ 215], 99.90th=[ 249], 99.95th=[ 268], 00:19:21.759 | 99.99th=[ 268] 00:19:21.759 bw ( KiB/s): min=82944, max=180224, per=7.51%, avg=121816.60, stdev=21428.88, samples=20 00:19:21.759 iops : min= 324, max= 704, avg=475.80, stdev=83.74, samples=20 00:19:21.759 lat (msec) : 50=0.25%, 100=10.91%, 250=88.76%, 500=0.08% 00:19:21.759 cpu : usr=0.25%, sys=1.57%, ctx=962, majf=0, minf=4097 00:19:21.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:21.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.759 issued rwts: total=4823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.759 job4: (groupid=0, jobs=1): err= 0: pid=80538: Thu Nov 21 02:36:00 2024 00:19:21.759 read: IOPS=658, BW=165MiB/s (173MB/s)(1658MiB/10062msec) 00:19:21.759 slat (usec): min=14, max=90193, avg=1451.36, stdev=5326.32 00:19:21.759 clat (msec): min=6, max=204, avg=95.51, stdev=38.09 00:19:21.759 lat (msec): min=6, max=219, avg=96.96, stdev=38.86 00:19:21.759 clat percentiles (msec): 00:19:21.759 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 41], 20.00th=[ 67], 00:19:21.759 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 112], 00:19:21.759 | 70.00th=[ 126], 80.00th=[ 136], 90.00th=[ 142], 95.00th=[ 150], 00:19:21.759 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 180], 99.95th=[ 186], 00:19:21.759 | 99.99th=[ 205] 00:19:21.759 bw ( KiB/s): min=115712, max=455680, per=10.36%, avg=168060.40, stdev=78840.06, samples=20 00:19:21.759 iops : min= 452, max= 1780, avg=656.45, stdev=307.95, samples=20 00:19:21.759 lat (msec) : 
10=0.03%, 20=2.37%, 50=10.36%, 100=42.76%, 250=44.48% 00:19:21.759 cpu : usr=0.29%, sys=1.98%, ctx=1287, majf=0, minf=4097 00:19:21.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:21.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.759 issued rwts: total=6630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.759 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.759 job5: (groupid=0, jobs=1): err= 0: pid=80539: Thu Nov 21 02:36:00 2024 00:19:21.759 read: IOPS=559, BW=140MiB/s (147MB/s)(1413MiB/10100msec) 00:19:21.759 slat (usec): min=20, max=65129, avg=1745.35, stdev=6127.71 00:19:21.759 clat (msec): min=10, max=249, avg=112.49, stdev=35.43 00:19:21.759 lat (msec): min=10, max=258, avg=114.24, stdev=36.31 00:19:21.759 clat percentiles (msec): 00:19:21.759 | 1.00th=[ 20], 5.00th=[ 40], 10.00th=[ 64], 20.00th=[ 90], 00:19:21.759 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 114], 60.00th=[ 126], 00:19:21.759 | 70.00th=[ 136], 80.00th=[ 144], 90.00th=[ 155], 95.00th=[ 161], 00:19:21.759 | 99.00th=[ 180], 99.50th=[ 209], 99.90th=[ 249], 99.95th=[ 249], 00:19:21.759 | 99.99th=[ 249] 00:19:21.759 bw ( KiB/s): min=100864, max=327680, per=8.81%, avg=143041.40, stdev=49814.27, samples=20 00:19:21.759 iops : min= 394, max= 1280, avg=558.70, stdev=194.59, samples=20 00:19:21.759 lat (msec) : 20=1.04%, 50=6.90%, 100=26.12%, 250=65.93% 00:19:21.759 cpu : usr=0.17%, sys=1.69%, ctx=1040, majf=0, minf=4097 00:19:21.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:21.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.760 issued rwts: total=5650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.760 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.760 job6: (groupid=0, jobs=1): err= 0: pid=80540: Thu Nov 21 02:36:00 2024 00:19:21.760 read: IOPS=612, BW=153MiB/s (161MB/s)(1543MiB/10077msec) 00:19:21.760 slat (usec): min=15, max=59953, avg=1520.06, stdev=5326.02 00:19:21.760 clat (msec): min=24, max=188, avg=102.81, stdev=30.56 00:19:21.760 lat (msec): min=25, max=200, avg=104.33, stdev=31.21 00:19:21.760 clat percentiles (msec): 00:19:21.760 | 1.00th=[ 56], 5.00th=[ 62], 10.00th=[ 66], 20.00th=[ 73], 00:19:21.760 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 102], 60.00th=[ 118], 00:19:21.760 | 70.00th=[ 127], 80.00th=[ 134], 90.00th=[ 142], 95.00th=[ 148], 00:19:21.760 | 99.00th=[ 163], 99.50th=[ 176], 99.90th=[ 188], 99.95th=[ 188], 00:19:21.760 | 99.99th=[ 188] 00:19:21.760 bw ( KiB/s): min=108544, max=223297, per=9.63%, avg=156311.90, stdev=42086.87, samples=20 00:19:21.760 iops : min= 424, max= 872, avg=610.50, stdev=164.38, samples=20 00:19:21.760 lat (msec) : 50=0.31%, 100=49.22%, 250=50.47% 00:19:21.760 cpu : usr=0.23%, sys=1.84%, ctx=1197, majf=0, minf=4097 00:19:21.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:21.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.760 issued rwts: total=6170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.760 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.760 job7: (groupid=0, jobs=1): err= 0: pid=80541: Thu Nov 21 02:36:00 2024 00:19:21.760 read: IOPS=609, BW=152MiB/s 
(160MB/s)(1536MiB/10078msec) 00:19:21.760 slat (usec): min=14, max=132257, avg=1591.69, stdev=6009.32 00:19:21.760 clat (usec): min=1894, max=230366, avg=103190.46, stdev=35764.48 00:19:21.760 lat (usec): min=1939, max=291506, avg=104782.15, stdev=36622.89 00:19:21.760 clat percentiles (msec): 00:19:21.760 | 1.00th=[ 11], 5.00th=[ 58], 10.00th=[ 67], 20.00th=[ 73], 00:19:21.760 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 101], 60.00th=[ 114], 00:19:21.760 | 70.00th=[ 129], 80.00th=[ 138], 90.00th=[ 148], 95.00th=[ 159], 00:19:21.760 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 203], 99.95th=[ 209], 00:19:21.760 | 99.99th=[ 230] 00:19:21.760 bw ( KiB/s): min=96768, max=278528, per=9.59%, avg=155635.10, stdev=50429.32, samples=20 00:19:21.760 iops : min= 378, max= 1088, avg=607.85, stdev=197.04, samples=20 00:19:21.760 lat (msec) : 2=0.02%, 4=0.10%, 10=0.85%, 20=1.95%, 50=1.63% 00:19:21.760 lat (msec) : 100=45.46%, 250=50.00% 00:19:21.760 cpu : usr=0.21%, sys=1.85%, ctx=1349, majf=0, minf=4097 00:19:21.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:21.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.760 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.760 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.760 job8: (groupid=0, jobs=1): err= 0: pid=80542: Thu Nov 21 02:36:00 2024 00:19:21.760 read: IOPS=542, BW=136MiB/s (142MB/s)(1366MiB/10073msec) 00:19:21.760 slat (usec): min=15, max=61728, avg=1762.23, stdev=5911.25 00:19:21.760 clat (msec): min=32, max=197, avg=116.05, stdev=23.66 00:19:21.760 lat (msec): min=32, max=204, avg=117.81, stdev=24.43 00:19:21.760 clat percentiles (msec): 00:19:21.760 | 1.00th=[ 59], 5.00th=[ 73], 10.00th=[ 80], 20.00th=[ 99], 00:19:21.760 | 30.00th=[ 107], 40.00th=[ 112], 50.00th=[ 118], 60.00th=[ 125], 00:19:21.760 | 70.00th=[ 129], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 150], 00:19:21.760 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 186], 99.95th=[ 197], 00:19:21.760 | 99.99th=[ 199] 00:19:21.760 bw ( KiB/s): min=113152, max=195072, per=8.52%, avg=138253.05, stdev=21361.65, samples=20 00:19:21.760 iops : min= 442, max= 762, avg=540.05, stdev=83.44, samples=20 00:19:21.760 lat (msec) : 50=0.11%, 100=21.74%, 250=78.15% 00:19:21.760 cpu : usr=0.18%, sys=1.76%, ctx=1187, majf=0, minf=4097 00:19:21.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:21.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.760 issued rwts: total=5465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.760 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.760 job9: (groupid=0, jobs=1): err= 0: pid=80543: Thu Nov 21 02:36:00 2024 00:19:21.760 read: IOPS=563, BW=141MiB/s (148MB/s)(1416MiB/10053msec) 00:19:21.760 slat (usec): min=20, max=133254, avg=1718.01, stdev=6063.33 00:19:21.760 clat (msec): min=40, max=202, avg=111.66, stdev=26.39 00:19:21.760 lat (msec): min=40, max=320, avg=113.38, stdev=27.30 00:19:21.760 clat percentiles (msec): 00:19:21.760 | 1.00th=[ 55], 5.00th=[ 66], 10.00th=[ 73], 20.00th=[ 90], 00:19:21.760 | 30.00th=[ 101], 40.00th=[ 107], 50.00th=[ 113], 60.00th=[ 121], 00:19:21.760 | 70.00th=[ 125], 80.00th=[ 133], 90.00th=[ 142], 95.00th=[ 150], 00:19:21.760 | 99.00th=[ 184], 99.50th=[ 199], 99.90th=[ 203], 
99.95th=[ 203], 00:19:21.760 | 99.99th=[ 203] 00:19:21.760 bw ( KiB/s): min=111904, max=222208, per=8.84%, avg=143400.95, stdev=28737.08, samples=20 00:19:21.760 iops : min= 437, max= 868, avg=560.10, stdev=112.30, samples=20 00:19:21.760 lat (msec) : 50=0.42%, 100=29.66%, 250=69.92% 00:19:21.760 cpu : usr=0.24%, sys=1.80%, ctx=1209, majf=0, minf=4097 00:19:21.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:21.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.760 issued rwts: total=5665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.760 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.760 job10: (groupid=0, jobs=1): err= 0: pid=80544: Thu Nov 21 02:36:00 2024 00:19:21.760 read: IOPS=562, BW=141MiB/s (147MB/s)(1422MiB/10108msec) 00:19:21.760 slat (usec): min=14, max=79235, avg=1709.09, stdev=6182.00 00:19:21.760 clat (msec): min=4, max=268, avg=111.85, stdev=38.41 00:19:21.760 lat (msec): min=4, max=268, avg=113.56, stdev=39.30 00:19:21.760 clat percentiles (msec): 00:19:21.760 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 44], 20.00th=[ 88], 00:19:21.760 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 115], 60.00th=[ 127], 00:19:21.760 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 161], 00:19:21.760 | 99.00th=[ 184], 99.50th=[ 197], 99.90th=[ 268], 99.95th=[ 268], 00:19:21.760 | 99.99th=[ 268] 00:19:21.760 bw ( KiB/s): min=100864, max=265728, per=8.87%, avg=143956.90, stdev=46803.30, samples=20 00:19:21.760 iops : min= 394, max= 1038, avg=562.25, stdev=182.85, samples=20 00:19:21.760 lat (msec) : 10=0.09%, 50=10.60%, 100=23.84%, 250=65.22%, 500=0.25% 00:19:21.760 cpu : usr=0.12%, sys=1.74%, ctx=1176, majf=0, minf=4097 00:19:21.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:21.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:21.760 issued rwts: total=5687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.760 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.760 00:19:21.760 Run status group 0 (all jobs): 00:19:21.760 READ: bw=1585MiB/s (1662MB/s), 119MiB/s-173MiB/s (125MB/s-182MB/s), io=15.6GiB (16.8GB), run=10053-10108msec 00:19:21.760 00:19:21.760 Disk stats (read/write): 00:19:21.760 nvme0n1: ios=11307/0, merge=0/0, ticks=1244135/0, in_queue=1244135, util=97.68% 00:19:21.760 nvme10n1: ios=13852/0, merge=0/0, ticks=1240601/0, in_queue=1240601, util=97.39% 00:19:21.760 nvme1n1: ios=10257/0, merge=0/0, ticks=1239734/0, in_queue=1239734, util=97.81% 00:19:21.760 nvme2n1: ios=9519/0, merge=0/0, ticks=1238536/0, in_queue=1238536, util=97.93% 00:19:21.760 nvme3n1: ios=13143/0, merge=0/0, ticks=1239037/0, in_queue=1239037, util=98.00% 00:19:21.760 nvme4n1: ios=11178/0, merge=0/0, ticks=1236638/0, in_queue=1236638, util=98.09% 00:19:21.760 nvme5n1: ios=12259/0, merge=0/0, ticks=1240378/0, in_queue=1240378, util=98.04% 00:19:21.760 nvme6n1: ios=12167/0, merge=0/0, ticks=1237938/0, in_queue=1237938, util=98.26% 00:19:21.760 nvme7n1: ios=10803/0, merge=0/0, ticks=1237877/0, in_queue=1237877, util=98.51% 00:19:21.760 nvme8n1: ios=11205/0, merge=0/0, ticks=1242279/0, in_queue=1242279, util=98.87% 00:19:21.760 nvme9n1: ios=11269/0, merge=0/0, ticks=1235919/0, in_queue=1235919, util=98.58% 00:19:21.760 02:36:00 -- target/multiconnection.sh@34 -- # 
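The read run above and the randwrite run that follows are both driven by scripts/fio-wrapper; its internals are not shown in this log, but the job file it assembles is, so an equivalent direct fio invocation can be sketched from those parameters. The -t read / -t randwrite wrapper option corresponds to rw= in the [global] section, and /tmp/multiconnection.fio is only an illustrative path.

# Rebuild roughly the job file printed in the log (one [jobN] per connected namespace),
# then run it with plain fio; shown here for the randwrite pass.
{
  cat <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
  n=0
  for d in /dev/nvme*n1; do
    printf '[job%d]\nfilename=%s\n' "$n" "$d"
    n=$((n + 1))
  done
} > /tmp/multiconnection.fio
fio /tmp/multiconnection.fio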
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:21.761 [global] 00:19:21.761 thread=1 00:19:21.761 invalidate=1 00:19:21.761 rw=randwrite 00:19:21.761 time_based=1 00:19:21.761 runtime=10 00:19:21.761 ioengine=libaio 00:19:21.761 direct=1 00:19:21.761 bs=262144 00:19:21.761 iodepth=64 00:19:21.761 norandommap=1 00:19:21.761 numjobs=1 00:19:21.761 00:19:21.761 [job0] 00:19:21.761 filename=/dev/nvme0n1 00:19:21.761 [job1] 00:19:21.761 filename=/dev/nvme10n1 00:19:21.761 [job2] 00:19:21.761 filename=/dev/nvme1n1 00:19:21.761 [job3] 00:19:21.761 filename=/dev/nvme2n1 00:19:21.761 [job4] 00:19:21.761 filename=/dev/nvme3n1 00:19:21.761 [job5] 00:19:21.761 filename=/dev/nvme4n1 00:19:21.761 [job6] 00:19:21.761 filename=/dev/nvme5n1 00:19:21.761 [job7] 00:19:21.761 filename=/dev/nvme6n1 00:19:21.761 [job8] 00:19:21.761 filename=/dev/nvme7n1 00:19:21.761 [job9] 00:19:21.761 filename=/dev/nvme8n1 00:19:21.761 [job10] 00:19:21.761 filename=/dev/nvme9n1 00:19:21.761 Could not set queue depth (nvme0n1) 00:19:21.761 Could not set queue depth (nvme10n1) 00:19:21.761 Could not set queue depth (nvme1n1) 00:19:21.761 Could not set queue depth (nvme2n1) 00:19:21.761 Could not set queue depth (nvme3n1) 00:19:21.761 Could not set queue depth (nvme4n1) 00:19:21.761 Could not set queue depth (nvme5n1) 00:19:21.761 Could not set queue depth (nvme6n1) 00:19:21.761 Could not set queue depth (nvme7n1) 00:19:21.761 Could not set queue depth (nvme8n1) 00:19:21.761 Could not set queue depth (nvme9n1) 00:19:21.761 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:21.761 fio-3.35 00:19:21.761 Starting 11 threads 00:19:31.747 00:19:31.747 job0: (groupid=0, jobs=1): err= 0: pid=80739: Thu Nov 21 02:36:11 2024 00:19:31.747 write: IOPS=337, BW=84.4MiB/s (88.5MB/s)(860MiB/10183msec); 0 zone resets 00:19:31.747 slat (usec): min=19, max=74930, avg=2904.38, stdev=5152.71 00:19:31.747 clat (msec): min=3, max=358, avg=186.57, stdev=24.69 00:19:31.747 lat (msec): min=3, max=358, avg=189.47, stdev=24.51 00:19:31.747 clat percentiles (msec): 00:19:31.747 | 1.00th=[ 52], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:19:31.747 | 30.00th=[ 184], 40.00th=[ 
186], 50.00th=[ 188], 60.00th=[ 190], 00:19:31.747 | 70.00th=[ 192], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 205], 00:19:31.747 | 99.00th=[ 257], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 359], 00:19:31.747 | 99.99th=[ 359] 00:19:31.747 bw ( KiB/s): min=82084, max=90292, per=7.79%, avg=86477.85, stdev=1847.52, samples=20 00:19:31.747 iops : min= 320, max= 352, avg=337.35, stdev= 7.20, samples=20 00:19:31.747 lat (msec) : 4=0.03%, 10=0.17%, 20=0.35%, 50=0.35%, 100=0.81% 00:19:31.747 lat (msec) : 250=97.18%, 500=1.11% 00:19:31.747 cpu : usr=0.76%, sys=1.02%, ctx=3788, majf=0, minf=1 00:19:31.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:31.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.747 issued rwts: total=0,3438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.747 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.747 job1: (groupid=0, jobs=1): err= 0: pid=80740: Thu Nov 21 02:36:11 2024 00:19:31.747 write: IOPS=498, BW=125MiB/s (131MB/s)(1261MiB/10114msec); 0 zone resets 00:19:31.747 slat (usec): min=23, max=31535, avg=1978.07, stdev=3396.18 00:19:31.747 clat (msec): min=2, max=238, avg=126.35, stdev=15.87 00:19:31.747 lat (msec): min=3, max=238, avg=128.33, stdev=15.75 00:19:31.747 clat percentiles (msec): 00:19:31.747 | 1.00th=[ 79], 5.00th=[ 93], 10.00th=[ 97], 20.00th=[ 123], 00:19:31.747 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 131], 00:19:31.747 | 70.00th=[ 132], 80.00th=[ 134], 90.00th=[ 138], 95.00th=[ 140], 00:19:31.747 | 99.00th=[ 150], 99.50th=[ 190], 99.90th=[ 230], 99.95th=[ 230], 00:19:31.747 | 99.99th=[ 239] 00:19:31.747 bw ( KiB/s): min=114688, max=167089, per=11.47%, avg=127432.95, stdev=11641.29, samples=20 00:19:31.747 iops : min= 448, max= 652, avg=497.75, stdev=45.35, samples=20 00:19:31.747 lat (msec) : 4=0.04%, 20=0.16%, 100=10.27%, 250=89.53% 00:19:31.747 cpu : usr=1.30%, sys=1.72%, ctx=5336, majf=0, minf=1 00:19:31.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:31.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.747 issued rwts: total=0,5042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.747 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.747 job2: (groupid=0, jobs=1): err= 0: pid=80752: Thu Nov 21 02:36:11 2024 00:19:31.747 write: IOPS=480, BW=120MiB/s (126MB/s)(1216MiB/10121msec); 0 zone resets 00:19:31.747 slat (usec): min=19, max=41416, avg=2017.50, stdev=3574.73 00:19:31.747 clat (usec): min=1658, max=246879, avg=131156.37, stdev=23955.80 00:19:31.747 lat (msec): min=3, max=246, avg=133.17, stdev=24.09 00:19:31.747 clat percentiles (msec): 00:19:31.747 | 1.00th=[ 12], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 125], 00:19:31.747 | 30.00th=[ 129], 40.00th=[ 130], 50.00th=[ 131], 60.00th=[ 131], 00:19:31.747 | 70.00th=[ 133], 80.00th=[ 136], 90.00th=[ 140], 95.00th=[ 178], 00:19:31.747 | 99.00th=[ 205], 99.50th=[ 211], 99.90th=[ 239], 99.95th=[ 239], 00:19:31.747 | 99.99th=[ 247] 00:19:31.747 bw ( KiB/s): min=83800, max=143872, per=11.07%, avg=122981.15, stdev=10727.19, samples=20 00:19:31.747 iops : min= 327, max= 562, avg=479.75, stdev=41.91, samples=20 00:19:31.747 lat (msec) : 2=0.02%, 4=0.12%, 10=0.70%, 20=0.31%, 50=0.76% 00:19:31.747 lat (msec) : 100=1.48%, 250=96.61% 00:19:31.747 cpu : 
usr=1.49%, sys=1.29%, ctx=5684, majf=0, minf=1 00:19:31.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:31.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.747 issued rwts: total=0,4862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.747 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.747 job3: (groupid=0, jobs=1): err= 0: pid=80753: Thu Nov 21 02:36:11 2024 00:19:31.747 write: IOPS=380, BW=95.1MiB/s (99.7MB/s)(964MiB/10142msec); 0 zone resets 00:19:31.747 slat (usec): min=19, max=62320, avg=2587.36, stdev=4560.52 00:19:31.747 clat (msec): min=24, max=299, avg=165.62, stdev=19.15 00:19:31.747 lat (msec): min=24, max=299, avg=168.21, stdev=18.89 00:19:31.747 clat percentiles (msec): 00:19:31.747 | 1.00th=[ 134], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 159], 00:19:31.747 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 165], 00:19:31.747 | 70.00th=[ 165], 80.00th=[ 171], 90.00th=[ 178], 95.00th=[ 205], 00:19:31.747 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 288], 99.95th=[ 300], 00:19:31.747 | 99.99th=[ 300] 00:19:31.747 bw ( KiB/s): min=72704, max=102400, per=8.75%, avg=97126.40, stdev=7345.01, samples=20 00:19:31.747 iops : min= 284, max= 400, avg=379.40, stdev=28.69, samples=20 00:19:31.747 lat (msec) : 50=0.26%, 100=0.52%, 250=98.76%, 500=0.47% 00:19:31.747 cpu : usr=1.04%, sys=1.23%, ctx=3195, majf=0, minf=1 00:19:31.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:31.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.747 issued rwts: total=0,3857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.747 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.747 job4: (groupid=0, jobs=1): err= 0: pid=80754: Thu Nov 21 02:36:11 2024 00:19:31.747 write: IOPS=379, BW=94.8MiB/s (99.4MB/s)(961MiB/10129msec); 0 zone resets 00:19:31.747 slat (usec): min=17, max=50930, avg=2596.79, stdev=4636.03 00:19:31.747 clat (msec): min=53, max=288, avg=166.07, stdev=19.61 00:19:31.747 lat (msec): min=53, max=288, avg=168.67, stdev=19.35 00:19:31.747 clat percentiles (msec): 00:19:31.747 | 1.00th=[ 146], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 157], 00:19:31.747 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 163], 00:19:31.747 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 218], 00:19:31.747 | 99.00th=[ 245], 99.50th=[ 251], 99.90th=[ 279], 99.95th=[ 288], 00:19:31.747 | 99.99th=[ 288] 00:19:31.747 bw ( KiB/s): min=69632, max=102912, per=8.71%, avg=96722.70, stdev=9004.37, samples=20 00:19:31.747 iops : min= 272, max= 402, avg=377.80, stdev=35.17, samples=20 00:19:31.747 lat (msec) : 100=0.42%, 250=98.98%, 500=0.60% 00:19:31.747 cpu : usr=1.14%, sys=1.19%, ctx=2886, majf=0, minf=1 00:19:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.748 issued rwts: total=0,3842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.748 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.748 job5: (groupid=0, jobs=1): err= 0: pid=80755: Thu Nov 21 02:36:11 2024 00:19:31.748 write: IOPS=337, BW=84.5MiB/s (88.6MB/s)(860MiB/10186msec); 0 zone resets 
00:19:31.748 slat (usec): min=17, max=52545, avg=2902.82, stdev=5003.70 00:19:31.748 clat (msec): min=25, max=356, avg=186.46, stdev=19.99 00:19:31.748 lat (msec): min=25, max=356, avg=189.36, stdev=19.62 00:19:31.748 clat percentiles (msec): 00:19:31.748 | 1.00th=[ 130], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:19:31.748 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:19:31.748 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 194], 00:19:31.748 | 99.00th=[ 255], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 355], 00:19:31.748 | 99.99th=[ 355] 00:19:31.748 bw ( KiB/s): min=83968, max=90112, per=7.79%, avg=86459.60, stdev=1831.83, samples=20 00:19:31.748 iops : min= 328, max= 352, avg=337.70, stdev= 7.17, samples=20 00:19:31.748 lat (msec) : 50=0.46%, 100=0.46%, 250=97.97%, 500=1.10% 00:19:31.748 cpu : usr=0.71%, sys=1.08%, ctx=5130, majf=0, minf=1 00:19:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.748 issued rwts: total=0,3441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.748 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.748 job6: (groupid=0, jobs=1): err= 0: pid=80756: Thu Nov 21 02:36:11 2024 00:19:31.748 write: IOPS=384, BW=96.1MiB/s (101MB/s)(976MiB/10150msec); 0 zone resets 00:19:31.748 slat (usec): min=20, max=29235, avg=2557.77, stdev=4378.64 00:19:31.748 clat (msec): min=20, max=303, avg=163.81, stdev=16.55 00:19:31.748 lat (msec): min=20, max=303, avg=166.36, stdev=16.19 00:19:31.748 clat percentiles (msec): 00:19:31.748 | 1.00th=[ 130], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 159], 00:19:31.748 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 163], 00:19:31.748 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 188], 00:19:31.748 | 99.00th=[ 203], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 305], 00:19:31.748 | 99.99th=[ 305] 00:19:31.748 bw ( KiB/s): min=85504, max=102605, per=8.86%, avg=98407.80, stdev=4925.95, samples=20 00:19:31.748 iops : min= 334, max= 400, avg=383.90, stdev=19.10, samples=20 00:19:31.748 lat (msec) : 50=0.28%, 100=0.51%, 250=98.64%, 500=0.56% 00:19:31.748 cpu : usr=1.20%, sys=1.20%, ctx=6184, majf=0, minf=1 00:19:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.748 issued rwts: total=0,3903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.748 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.748 job7: (groupid=0, jobs=1): err= 0: pid=80757: Thu Nov 21 02:36:11 2024 00:19:31.748 write: IOPS=336, BW=84.2MiB/s (88.3MB/s)(857MiB/10181msec); 0 zone resets 00:19:31.748 slat (usec): min=19, max=50896, avg=2911.37, stdev=5115.10 00:19:31.748 clat (msec): min=53, max=360, avg=187.02, stdev=19.49 00:19:31.748 lat (msec): min=53, max=360, avg=189.93, stdev=19.07 00:19:31.748 clat percentiles (msec): 00:19:31.748 | 1.00th=[ 108], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:19:31.748 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:19:31.748 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 199], 00:19:31.748 | 99.00th=[ 259], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 363], 00:19:31.748 | 99.99th=[ 363] 00:19:31.748 bw ( KiB/s): 
min=81920, max=88064, per=7.76%, avg=86152.15, stdev=1809.78, samples=20 00:19:31.748 iops : min= 320, max= 344, avg=336.50, stdev= 7.05, samples=20 00:19:31.748 lat (msec) : 100=0.82%, 250=98.08%, 500=1.11% 00:19:31.748 cpu : usr=0.57%, sys=0.95%, ctx=3465, majf=0, minf=1 00:19:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.748 issued rwts: total=0,3429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.748 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.748 job8: (groupid=0, jobs=1): err= 0: pid=80758: Thu Nov 21 02:36:11 2024 00:19:31.748 write: IOPS=380, BW=95.2MiB/s (99.8MB/s)(966MiB/10145msec); 0 zone resets 00:19:31.748 slat (usec): min=26, max=53539, avg=2583.18, stdev=4552.19 00:19:31.748 clat (msec): min=20, max=304, avg=165.46, stdev=19.95 00:19:31.748 lat (msec): min=20, max=304, avg=168.04, stdev=19.72 00:19:31.748 clat percentiles (msec): 00:19:31.748 | 1.00th=[ 122], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 157], 00:19:31.748 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 163], 00:19:31.748 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 211], 00:19:31.748 | 99.00th=[ 230], 99.50th=[ 255], 99.90th=[ 296], 99.95th=[ 305], 00:19:31.748 | 99.99th=[ 305] 00:19:31.748 bw ( KiB/s): min=73216, max=102400, per=8.76%, avg=97254.40, stdev=7808.23, samples=20 00:19:31.748 iops : min= 286, max= 400, avg=379.90, stdev=30.50, samples=20 00:19:31.748 lat (msec) : 50=0.28%, 100=0.52%, 250=98.63%, 500=0.57% 00:19:31.748 cpu : usr=1.25%, sys=1.20%, ctx=5798, majf=0, minf=1 00:19:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.748 issued rwts: total=0,3862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.748 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.748 job9: (groupid=0, jobs=1): err= 0: pid=80759: Thu Nov 21 02:36:11 2024 00:19:31.748 write: IOPS=339, BW=85.0MiB/s (89.1MB/s)(865MiB/10184msec); 0 zone resets 00:19:31.748 slat (usec): min=19, max=91844, avg=2828.00, stdev=5094.07 00:19:31.748 clat (msec): min=79, max=357, avg=185.39, stdev=18.80 00:19:31.748 lat (msec): min=83, max=357, avg=188.22, stdev=18.50 00:19:31.748 clat percentiles (msec): 00:19:31.748 | 1.00th=[ 106], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:19:31.748 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 188], 00:19:31.748 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 192], 95.00th=[ 194], 00:19:31.748 | 99.00th=[ 255], 99.50th=[ 313], 99.90th=[ 347], 99.95th=[ 359], 00:19:31.748 | 99.99th=[ 359] 00:19:31.748 bw ( KiB/s): min=77824, max=93184, per=7.83%, avg=86988.85, stdev=3290.36, samples=20 00:19:31.748 iops : min= 304, max= 364, avg=339.75, stdev=12.86, samples=20 00:19:31.748 lat (msec) : 100=0.55%, 250=98.35%, 500=1.10% 00:19:31.748 cpu : usr=0.70%, sys=1.09%, ctx=5715, majf=0, minf=1 00:19:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.748 issued rwts: total=0,3461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.748 
latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.748 job10: (groupid=0, jobs=1): err= 0: pid=80761: Thu Nov 21 02:36:11 2024 00:19:31.748 write: IOPS=499, BW=125MiB/s (131MB/s)(1263MiB/10110msec); 0 zone resets 00:19:31.748 slat (usec): min=25, max=36160, avg=1974.83, stdev=3395.46 00:19:31.748 clat (msec): min=9, max=239, avg=126.10, stdev=17.10 00:19:31.748 lat (msec): min=9, max=239, avg=128.08, stdev=17.04 00:19:31.748 clat percentiles (msec): 00:19:31.748 | 1.00th=[ 52], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 123], 00:19:31.748 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 131], 00:19:31.748 | 70.00th=[ 132], 80.00th=[ 134], 90.00th=[ 138], 95.00th=[ 140], 00:19:31.748 | 99.00th=[ 150], 99.50th=[ 192], 99.90th=[ 232], 99.95th=[ 232], 00:19:31.748 | 99.99th=[ 241] 00:19:31.748 bw ( KiB/s): min=114688, max=173568, per=11.50%, avg=127667.20, stdev=12919.92, samples=20 00:19:31.748 iops : min= 448, max= 678, avg=498.70, stdev=50.47, samples=20 00:19:31.748 lat (msec) : 10=0.04%, 20=0.08%, 50=0.79%, 100=9.37%, 250=89.72% 00:19:31.748 cpu : usr=1.46%, sys=1.39%, ctx=6688, majf=0, minf=1 00:19:31.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:31.748 issued rwts: total=0,5050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.748 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:31.748 00:19:31.748 Run status group 0 (all jobs): 00:19:31.748 WRITE: bw=1085MiB/s (1137MB/s), 84.2MiB/s-125MiB/s (88.3MB/s-131MB/s), io=10.8GiB (11.6GB), run=10110-10186msec 00:19:31.748 00:19:31.748 Disk stats (read/write): 00:19:31.748 nvme0n1: ios=49/6751, merge=0/0, ticks=62/1213150, in_queue=1213212, util=97.90% 00:19:31.748 nvme10n1: ios=49/9942, merge=0/0, ticks=34/1213106, in_queue=1213140, util=97.81% 00:19:31.748 nvme1n1: ios=33/9602, merge=0/0, ticks=17/1215279, in_queue=1215296, util=98.00% 00:19:31.748 nvme2n1: ios=0/7581, merge=0/0, ticks=0/1212367, in_queue=1212367, util=97.87% 00:19:31.748 nvme3n1: ios=0/7539, merge=0/0, ticks=0/1210121, in_queue=1210121, util=97.77% 00:19:31.748 nvme4n1: ios=5/6738, merge=0/0, ticks=1/1209226, in_queue=1209227, util=98.06% 00:19:31.748 nvme5n1: ios=0/7682, merge=0/0, ticks=0/1213692, in_queue=1213692, util=98.44% 00:19:31.748 nvme6n1: ios=0/6719, merge=0/0, ticks=0/1210105, in_queue=1210105, util=98.27% 00:19:31.748 nvme7n1: ios=0/7602, merge=0/0, ticks=0/1213339, in_queue=1213339, util=98.74% 00:19:31.748 nvme8n1: ios=0/6779, merge=0/0, ticks=0/1210714, in_queue=1210714, util=98.63% 00:19:31.748 nvme9n1: ios=0/9962, merge=0/0, ticks=0/1212795, in_queue=1212795, util=98.76% 00:19:31.748 02:36:11 -- target/multiconnection.sh@36 -- # sync 00:19:31.748 02:36:11 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:31.748 02:36:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:31.748 02:36:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:31.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:31.748 02:36:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:31.748 02:36:11 -- common/autotest_common.sh@1208 -- # local i=0 00:19:31.748 02:36:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:31.749 02:36:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:31.749 02:36:11 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:31.749 02:36:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:31.749 02:36:11 -- common/autotest_common.sh@1220 -- # return 0 00:19:31.749 02:36:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.749 02:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.749 02:36:11 -- common/autotest_common.sh@10 -- # set +x 00:19:31.749 02:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.749 02:36:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:31.749 02:36:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:31.749 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:31.749 02:36:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:31.749 02:36:11 -- common/autotest_common.sh@1208 -- # local i=0 00:19:31.749 02:36:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:31.749 02:36:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:31.749 02:36:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:31.749 02:36:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:31.749 02:36:11 -- common/autotest_common.sh@1220 -- # return 0 00:19:31.749 02:36:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:31.749 02:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.749 02:36:11 -- common/autotest_common.sh@10 -- # set +x 00:19:31.749 02:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.749 02:36:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:31.749 02:36:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:31.749 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:31.749 02:36:11 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:31.749 02:36:11 -- common/autotest_common.sh@1208 -- # local i=0 00:19:31.749 02:36:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:31.749 02:36:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:31.749 02:36:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:31.749 02:36:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:31.749 02:36:11 -- common/autotest_common.sh@1220 -- # return 0 00:19:31.749 02:36:11 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:31.749 02:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.749 02:36:11 -- common/autotest_common.sh@10 -- # set +x 00:19:31.749 02:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.749 02:36:11 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:31.749 02:36:11 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:31.749 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:31.749 02:36:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:31.749 02:36:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:31.749 02:36:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:31.749 02:36:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:31.749 02:36:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:31.749 02:36:12 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:31.749 02:36:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:31.749 02:36:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:31.749 02:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.749 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:19:31.749 02:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.749 02:36:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:31.749 02:36:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:31.749 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:31.749 02:36:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:31.749 02:36:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:31.749 02:36:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:31.749 02:36:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:31.749 02:36:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:31.749 02:36:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:31.749 02:36:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:31.749 02:36:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:31.749 02:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.749 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:19:31.749 02:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.749 02:36:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:31.749 02:36:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:32.008 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:32.008 02:36:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:32.008 02:36:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:32.008 02:36:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:32.008 02:36:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:32.008 02:36:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:32.008 02:36:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:32.008 02:36:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:32.008 02:36:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:32.008 02:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.008 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:19:32.008 02:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.008 02:36:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:32.008 02:36:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:32.008 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:32.008 02:36:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:32.008 02:36:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:32.008 02:36:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:32.008 02:36:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:32.008 02:36:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:32.008 02:36:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:32.008 02:36:12 -- 
common/autotest_common.sh@1220 -- # return 0 00:19:32.008 02:36:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:32.008 02:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.008 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:19:32.008 02:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.008 02:36:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:32.008 02:36:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:32.008 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:32.008 02:36:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:32.008 02:36:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:32.008 02:36:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:32.008 02:36:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:32.008 02:36:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:32.008 02:36:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:32.008 02:36:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:32.008 02:36:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:32.008 02:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.008 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:19:32.008 02:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.008 02:36:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:32.008 02:36:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:32.267 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:32.267 02:36:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:32.267 02:36:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:32.267 02:36:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:32.267 02:36:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:32.267 02:36:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:32.267 02:36:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:32.267 02:36:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:32.267 02:36:12 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:32.267 02:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.267 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:19:32.267 02:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.267 02:36:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:32.267 02:36:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:32.267 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:32.267 02:36:12 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:32.267 02:36:12 -- common/autotest_common.sh@1208 -- # local i=0 00:19:32.267 02:36:12 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:32.267 02:36:12 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:32.267 02:36:12 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:32.267 02:36:12 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:32.526 02:36:12 -- common/autotest_common.sh@1220 -- # return 0 00:19:32.526 02:36:12 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:32.526 02:36:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.526 02:36:12 -- common/autotest_common.sh@10 -- # set +x 00:19:32.526 02:36:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.526 02:36:12 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:32.526 02:36:12 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:32.526 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:32.526 02:36:13 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:32.527 02:36:13 -- common/autotest_common.sh@1208 -- # local i=0 00:19:32.527 02:36:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:32.527 02:36:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:32.527 02:36:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:32.527 02:36:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:32.527 02:36:13 -- common/autotest_common.sh@1220 -- # return 0 00:19:32.527 02:36:13 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:32.527 02:36:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.527 02:36:13 -- common/autotest_common.sh@10 -- # set +x 00:19:32.527 02:36:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.527 02:36:13 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:32.527 02:36:13 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:32.527 02:36:13 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:32.527 02:36:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:32.527 02:36:13 -- nvmf/common.sh@116 -- # sync 00:19:32.527 02:36:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:32.527 02:36:13 -- nvmf/common.sh@119 -- # set +e 00:19:32.527 02:36:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:32.527 02:36:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:32.527 rmmod nvme_tcp 00:19:32.527 rmmod nvme_fabrics 00:19:32.527 rmmod nvme_keyring 00:19:32.527 02:36:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:32.786 02:36:13 -- nvmf/common.sh@123 -- # set -e 00:19:32.786 02:36:13 -- nvmf/common.sh@124 -- # return 0 00:19:32.786 02:36:13 -- nvmf/common.sh@477 -- # '[' -n 80050 ']' 00:19:32.786 02:36:13 -- nvmf/common.sh@478 -- # killprocess 80050 00:19:32.786 02:36:13 -- common/autotest_common.sh@936 -- # '[' -z 80050 ']' 00:19:32.786 02:36:13 -- common/autotest_common.sh@940 -- # kill -0 80050 00:19:32.786 02:36:13 -- common/autotest_common.sh@941 -- # uname 00:19:32.786 02:36:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:32.786 02:36:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80050 00:19:32.786 killing process with pid 80050 00:19:32.786 02:36:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:32.786 02:36:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:32.786 02:36:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80050' 00:19:32.786 02:36:13 -- common/autotest_common.sh@955 -- # kill 80050 00:19:32.786 02:36:13 -- common/autotest_common.sh@960 -- # wait 80050 00:19:33.354 02:36:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:33.354 02:36:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:33.354 02:36:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
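For reference, the teardown traced above follows one pattern per subsystem: the initiator session is dropped with nvme disconnect, the subsystem is then removed on the target through the nvmf_delete_subsystem RPC, and finally the nvmf app is killed and the kernel modules unloaded. A minimal sketch of that loop, assuming the same nqn.2016-06.io.spdk:cnodeN naming seen in the log and that rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock:
# Sketch of the per-subsystem teardown performed by multiconnection.sh (names assumed from the log above).
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"                      # drop the initiator-side session
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem \
      "nqn.2016-06.io.spdk:cnode$i"                                     # remove the target-side subsystem
done
kill "$nvmfpid"                                                          # stop the nvmf_tgt application
modprobe -r nvme-tcp nvme-fabrics                                        # unload the initiator kernel modules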
00:19:33.354 02:36:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.354 02:36:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:33.354 02:36:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.354 02:36:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.354 02:36:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.354 02:36:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:33.354 00:19:33.354 real 0m50.591s 00:19:33.354 user 2m53.972s 00:19:33.354 sys 0m21.945s 00:19:33.354 02:36:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:33.354 02:36:13 -- common/autotest_common.sh@10 -- # set +x 00:19:33.354 ************************************ 00:19:33.354 END TEST nvmf_multiconnection 00:19:33.354 ************************************ 00:19:33.354 02:36:13 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:33.354 02:36:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:33.354 02:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.354 02:36:13 -- common/autotest_common.sh@10 -- # set +x 00:19:33.354 ************************************ 00:19:33.354 START TEST nvmf_initiator_timeout 00:19:33.354 ************************************ 00:19:33.354 02:36:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:33.614 * Looking for test storage... 00:19:33.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:33.614 02:36:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:33.614 02:36:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:33.614 02:36:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:33.614 02:36:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:33.614 02:36:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:33.614 02:36:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:33.614 02:36:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:33.614 02:36:14 -- scripts/common.sh@335 -- # IFS=.-: 00:19:33.614 02:36:14 -- scripts/common.sh@335 -- # read -ra ver1 00:19:33.614 02:36:14 -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.614 02:36:14 -- scripts/common.sh@336 -- # read -ra ver2 00:19:33.614 02:36:14 -- scripts/common.sh@337 -- # local 'op=<' 00:19:33.614 02:36:14 -- scripts/common.sh@339 -- # ver1_l=2 00:19:33.614 02:36:14 -- scripts/common.sh@340 -- # ver2_l=1 00:19:33.614 02:36:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:33.614 02:36:14 -- scripts/common.sh@343 -- # case "$op" in 00:19:33.614 02:36:14 -- scripts/common.sh@344 -- # : 1 00:19:33.614 02:36:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:33.614 02:36:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.614 02:36:14 -- scripts/common.sh@364 -- # decimal 1 00:19:33.614 02:36:14 -- scripts/common.sh@352 -- # local d=1 00:19:33.614 02:36:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.614 02:36:14 -- scripts/common.sh@354 -- # echo 1 00:19:33.614 02:36:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:33.614 02:36:14 -- scripts/common.sh@365 -- # decimal 2 00:19:33.614 02:36:14 -- scripts/common.sh@352 -- # local d=2 00:19:33.614 02:36:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.614 02:36:14 -- scripts/common.sh@354 -- # echo 2 00:19:33.614 02:36:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:33.614 02:36:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:33.614 02:36:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:33.614 02:36:14 -- scripts/common.sh@367 -- # return 0 00:19:33.614 02:36:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.614 02:36:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.614 --rc genhtml_branch_coverage=1 00:19:33.614 --rc genhtml_function_coverage=1 00:19:33.614 --rc genhtml_legend=1 00:19:33.614 --rc geninfo_all_blocks=1 00:19:33.614 --rc geninfo_unexecuted_blocks=1 00:19:33.614 00:19:33.614 ' 00:19:33.614 02:36:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.614 --rc genhtml_branch_coverage=1 00:19:33.614 --rc genhtml_function_coverage=1 00:19:33.614 --rc genhtml_legend=1 00:19:33.614 --rc geninfo_all_blocks=1 00:19:33.614 --rc geninfo_unexecuted_blocks=1 00:19:33.614 00:19:33.614 ' 00:19:33.614 02:36:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.614 --rc genhtml_branch_coverage=1 00:19:33.614 --rc genhtml_function_coverage=1 00:19:33.614 --rc genhtml_legend=1 00:19:33.614 --rc geninfo_all_blocks=1 00:19:33.614 --rc geninfo_unexecuted_blocks=1 00:19:33.614 00:19:33.614 ' 00:19:33.614 02:36:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:33.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.614 --rc genhtml_branch_coverage=1 00:19:33.614 --rc genhtml_function_coverage=1 00:19:33.614 --rc genhtml_legend=1 00:19:33.614 --rc geninfo_all_blocks=1 00:19:33.614 --rc geninfo_unexecuted_blocks=1 00:19:33.614 00:19:33.614 ' 00:19:33.614 02:36:14 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:33.614 02:36:14 -- nvmf/common.sh@7 -- # uname -s 00:19:33.614 02:36:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.614 02:36:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.614 02:36:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.614 02:36:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.614 02:36:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.614 02:36:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.614 02:36:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.614 02:36:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.614 02:36:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.614 02:36:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.614 02:36:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 
00:19:33.614 02:36:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:19:33.614 02:36:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.614 02:36:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.614 02:36:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:33.614 02:36:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:33.614 02:36:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.614 02:36:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.614 02:36:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.614 02:36:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.615 02:36:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.615 02:36:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.615 02:36:14 -- paths/export.sh@5 -- # export PATH 00:19:33.615 02:36:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.615 02:36:14 -- nvmf/common.sh@46 -- # : 0 00:19:33.615 02:36:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:33.615 02:36:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:33.615 02:36:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:33.615 02:36:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.615 02:36:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.615 02:36:14 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:33.615 02:36:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:33.615 02:36:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:33.615 02:36:14 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.615 02:36:14 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.615 02:36:14 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:33.615 02:36:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:33.615 02:36:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.615 02:36:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:33.615 02:36:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:33.615 02:36:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:33.615 02:36:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.615 02:36:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.615 02:36:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.615 02:36:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:33.615 02:36:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:33.615 02:36:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:33.615 02:36:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:33.615 02:36:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:33.615 02:36:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:33.615 02:36:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.615 02:36:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.615 02:36:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:33.615 02:36:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:33.615 02:36:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:33.615 02:36:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:33.615 02:36:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:33.615 02:36:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.615 02:36:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:33.615 02:36:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:33.615 02:36:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:33.615 02:36:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:33.615 02:36:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:33.615 02:36:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:33.615 Cannot find device "nvmf_tgt_br" 00:19:33.615 02:36:14 -- nvmf/common.sh@154 -- # true 00:19:33.615 02:36:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.615 Cannot find device "nvmf_tgt_br2" 00:19:33.615 02:36:14 -- nvmf/common.sh@155 -- # true 00:19:33.615 02:36:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:33.615 02:36:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:33.615 Cannot find device "nvmf_tgt_br" 00:19:33.615 02:36:14 -- nvmf/common.sh@157 -- # true 00:19:33.615 02:36:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:33.874 Cannot find device "nvmf_tgt_br2" 00:19:33.874 02:36:14 -- nvmf/common.sh@158 -- # true 00:19:33.874 02:36:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:33.874 02:36:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:33.874 02:36:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:33.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.874 02:36:14 -- nvmf/common.sh@161 -- # true 00:19:33.874 02:36:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.874 02:36:14 -- nvmf/common.sh@162 -- # true 00:19:33.874 02:36:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:33.874 02:36:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:33.874 02:36:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:33.874 02:36:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.874 02:36:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.874 02:36:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.874 02:36:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.874 02:36:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:33.874 02:36:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:33.874 02:36:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:33.874 02:36:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:33.874 02:36:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:33.874 02:36:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:33.874 02:36:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.874 02:36:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.874 02:36:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.874 02:36:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:33.874 02:36:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:33.874 02:36:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.874 02:36:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.874 02:36:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.874 02:36:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.874 02:36:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.874 02:36:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:33.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:33.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:19:33.874 00:19:33.874 --- 10.0.0.2 ping statistics --- 00:19:33.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.874 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:33.874 02:36:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:33.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:19:33.874 00:19:33.874 --- 10.0.0.3 ping statistics --- 00:19:33.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.874 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:33.874 02:36:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:33.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:33.874 00:19:33.874 --- 10.0.0.1 ping statistics --- 00:19:33.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.874 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:33.874 02:36:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.874 02:36:14 -- nvmf/common.sh@421 -- # return 0 00:19:33.874 02:36:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:33.874 02:36:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.874 02:36:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:33.874 02:36:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:33.874 02:36:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.874 02:36:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:33.874 02:36:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:34.137 02:36:14 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:34.137 02:36:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:34.137 02:36:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.137 02:36:14 -- common/autotest_common.sh@10 -- # set +x 00:19:34.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.137 02:36:14 -- nvmf/common.sh@469 -- # nvmfpid=81144 00:19:34.137 02:36:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:34.137 02:36:14 -- nvmf/common.sh@470 -- # waitforlisten 81144 00:19:34.137 02:36:14 -- common/autotest_common.sh@829 -- # '[' -z 81144 ']' 00:19:34.137 02:36:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.137 02:36:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.137 02:36:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.138 02:36:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.138 02:36:14 -- common/autotest_common.sh@10 -- # set +x 00:19:34.138 [2024-11-21 02:36:14.582471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:34.138 [2024-11-21 02:36:14.582716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.138 [2024-11-21 02:36:14.715051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.399 [2024-11-21 02:36:14.804579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:34.399 [2024-11-21 02:36:14.804750] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.399 [2024-11-21 02:36:14.804766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.399 [2024-11-21 02:36:14.804791] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
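The traced nvmf_veth_init above is what makes the 10.0.0.x addresses used by the rest of this test reachable: the target runs inside the nvmf_tgt_ns_spdk namespace, joined to the initiator through veth pairs and a bridge. A condensed sketch of that wiring, with interface names and addresses taken from the log (the second target interface, nvmf_tgt_if2/10.0.0.3, is configured the same way and omitted here):
# Condensed sketch of the namespace/veth topology built by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge joining both sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic on port 4420
ping -c 1 10.0.0.2                                                # reachability check before starting the target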
00:19:34.400 [2024-11-21 02:36:14.804904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.400 [2024-11-21 02:36:14.804993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.400 [2024-11-21 02:36:14.805122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.400 [2024-11-21 02:36:14.805125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.338 02:36:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.338 02:36:15 -- common/autotest_common.sh@862 -- # return 0 00:19:35.338 02:36:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:35.338 02:36:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:35.338 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:19:35.338 02:36:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:35.338 02:36:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.338 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:19:35.338 Malloc0 00:19:35.338 02:36:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:35.338 02:36:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.338 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:19:35.338 Delay0 00:19:35.338 02:36:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:35.338 02:36:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.338 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:19:35.338 [2024-11-21 02:36:15.732008] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.338 02:36:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:35.338 02:36:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.338 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:19:35.338 02:36:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:35.338 02:36:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.338 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:19:35.338 02:36:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.338 02:36:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.338 02:36:15 -- common/autotest_common.sh@10 -- # set +x 00:19:35.338 [2024-11-21 02:36:15.760239] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.338 02:36:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:35.338 02:36:15 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:35.338 02:36:15 -- common/autotest_common.sh@1187 -- # local i=0 00:19:35.338 02:36:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:35.338 02:36:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:35.338 02:36:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:37.872 02:36:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:37.872 02:36:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:37.872 02:36:17 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:37.872 02:36:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:37.872 02:36:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:37.872 02:36:17 -- common/autotest_common.sh@1197 -- # return 0 00:19:37.872 02:36:17 -- target/initiator_timeout.sh@35 -- # fio_pid=81227 00:19:37.872 02:36:17 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:37.872 02:36:17 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:37.872 [global] 00:19:37.872 thread=1 00:19:37.872 invalidate=1 00:19:37.872 rw=write 00:19:37.872 time_based=1 00:19:37.872 runtime=60 00:19:37.872 ioengine=libaio 00:19:37.872 direct=1 00:19:37.872 bs=4096 00:19:37.872 iodepth=1 00:19:37.872 norandommap=0 00:19:37.872 numjobs=1 00:19:37.872 00:19:37.872 verify_dump=1 00:19:37.872 verify_backlog=512 00:19:37.872 verify_state_save=0 00:19:37.872 do_verify=1 00:19:37.872 verify=crc32c-intel 00:19:37.872 [job0] 00:19:37.872 filename=/dev/nvme0n1 00:19:37.872 Could not set queue depth (nvme0n1) 00:19:37.872 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:37.872 fio-3.35 00:19:37.872 Starting 1 thread 00:19:40.406 02:36:20 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:40.406 02:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.406 02:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:40.406 true 00:19:40.406 02:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.406 02:36:20 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:40.406 02:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.406 02:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:40.406 true 00:19:40.406 02:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.406 02:36:20 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:40.406 02:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.406 02:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:40.406 true 00:19:40.406 02:36:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.406 02:36:20 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:40.406 02:36:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.406 02:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:40.406 true 00:19:40.406 02:36:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.406 02:36:21 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:43.696 02:36:24 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:43.696 02:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.696 02:36:24 -- common/autotest_common.sh@10 -- # set +x 00:19:43.696 true 00:19:43.696 02:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.696 02:36:24 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:43.696 02:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.696 02:36:24 -- common/autotest_common.sh@10 -- # set +x 00:19:43.696 true 00:19:43.696 02:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.696 02:36:24 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:43.696 02:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.696 02:36:24 -- common/autotest_common.sh@10 -- # set +x 00:19:43.696 true 00:19:43.696 02:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.696 02:36:24 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:43.696 02:36:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.696 02:36:24 -- common/autotest_common.sh@10 -- # set +x 00:19:43.696 true 00:19:43.696 02:36:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.696 02:36:24 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:43.696 02:36:24 -- target/initiator_timeout.sh@54 -- # wait 81227 00:20:40.008 00:20:40.008 job0: (groupid=0, jobs=1): err= 0: pid=81248: Thu Nov 21 02:37:18 2024 00:20:40.008 read: IOPS=808, BW=3233KiB/s (3311kB/s)(189MiB/60000msec) 00:20:40.008 slat (usec): min=11, max=11183, avg=14.31, stdev=61.57 00:20:40.008 clat (usec): min=150, max=40480k, avg=1039.10, stdev=183817.44 00:20:40.008 lat (usec): min=169, max=40480k, avg=1053.41, stdev=183817.44 00:20:40.008 clat percentiles (usec): 00:20:40.008 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:20:40.008 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:20:40.008 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 239], 00:20:40.008 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 578], 99.95th=[ 693], 00:20:40.008 | 99.99th=[ 1074] 00:20:40.008 write: IOPS=810, BW=3243KiB/s (3320kB/s)(190MiB/60000msec); 0 zone resets 00:20:40.008 slat (usec): min=15, max=663, avg=19.97, stdev= 5.77 00:20:40.008 clat (usec): min=116, max=1215, avg=160.19, stdev=24.84 00:20:40.008 lat (usec): min=141, max=1236, avg=180.17, stdev=26.13 00:20:40.008 clat percentiles (usec): 00:20:40.008 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:20:40.008 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:20:40.008 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 190], 00:20:40.008 | 99.00th=[ 215], 99.50th=[ 239], 99.90th=[ 469], 99.95th=[ 603], 00:20:40.008 | 99.99th=[ 988] 00:20:40.008 bw ( KiB/s): min= 4096, max=12136, per=100.00%, avg=9766.08, stdev=1745.67, samples=39 00:20:40.008 iops : min= 1024, max= 3034, avg=2441.51, stdev=436.42, samples=39 00:20:40.008 lat (usec) : 250=98.65%, 500=1.23%, 750=0.09%, 1000=0.02% 00:20:40.008 lat (msec) : 2=0.01%, 50=0.01%, >=2000=0.01% 00:20:40.008 cpu : usr=0.54%, sys=2.01%, ctx=97211, majf=0, minf=5 00:20:40.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:40.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.008 issued rwts: total=48495,48640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:40.008 00:20:40.008 Run status group 0 (all jobs): 00:20:40.008 READ: bw=3233KiB/s (3311kB/s), 3233KiB/s-3233KiB/s (3311kB/s-3311kB/s), io=189MiB (199MB), run=60000-60000msec 00:20:40.008 WRITE: bw=3243KiB/s (3320kB/s), 3243KiB/s-3243KiB/s (3320kB/s-3320kB/s), io=190MiB (199MB), run=60000-60000msec 00:20:40.008 00:20:40.008 Disk stats (read/write): 00:20:40.008 nvme0n1: ios=48400/48537, merge=0/0, ticks=10282/8295, in_queue=18577, util=99.79% 00:20:40.008 02:37:18 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:40.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:40.008 02:37:18 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:40.008 02:37:18 -- common/autotest_common.sh@1208 -- # local i=0 00:20:40.008 02:37:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:40.008 02:37:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.008 02:37:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:40.008 02:37:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.009 02:37:18 -- common/autotest_common.sh@1220 -- # return 0 00:20:40.009 02:37:18 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:40.009 nvmf hotplug test: fio successful as expected 00:20:40.009 02:37:18 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:40.009 02:37:18 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.009 02:37:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.009 02:37:18 -- common/autotest_common.sh@10 -- # set +x 00:20:40.009 02:37:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.009 02:37:18 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:40.009 02:37:18 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:40.009 02:37:18 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:40.009 02:37:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:40.009 02:37:18 -- nvmf/common.sh@116 -- # sync 00:20:40.009 02:37:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:40.009 02:37:18 -- nvmf/common.sh@119 -- # set +e 00:20:40.009 02:37:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:40.009 02:37:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:40.009 rmmod nvme_tcp 00:20:40.009 rmmod nvme_fabrics 00:20:40.009 rmmod nvme_keyring 00:20:40.009 02:37:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:40.009 02:37:18 -- nvmf/common.sh@123 -- # set -e 00:20:40.009 02:37:18 -- nvmf/common.sh@124 -- # return 0 00:20:40.009 02:37:18 -- nvmf/common.sh@477 -- # '[' -n 81144 ']' 00:20:40.009 02:37:18 -- nvmf/common.sh@478 -- # killprocess 81144 00:20:40.009 02:37:18 -- common/autotest_common.sh@936 -- # '[' -z 81144 ']' 00:20:40.009 02:37:18 -- common/autotest_common.sh@940 -- # kill -0 81144 00:20:40.009 02:37:18 -- common/autotest_common.sh@941 -- # uname 00:20:40.009 02:37:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:40.009 02:37:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81144 00:20:40.009 02:37:18 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:40.009 02:37:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:40.009 killing process with pid 81144 00:20:40.009 02:37:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81144' 00:20:40.009 02:37:18 -- common/autotest_common.sh@955 -- # kill 81144 00:20:40.009 02:37:18 -- common/autotest_common.sh@960 -- # wait 81144 00:20:40.009 02:37:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:40.009 02:37:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:40.009 02:37:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:40.009 02:37:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.009 02:37:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:40.009 02:37:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.009 02:37:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.009 02:37:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.009 02:37:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:40.009 00:20:40.009 real 1m4.968s 00:20:40.009 user 4m8.628s 00:20:40.009 sys 0m7.435s 00:20:40.009 02:37:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:40.009 02:37:18 -- common/autotest_common.sh@10 -- # set +x 00:20:40.009 ************************************ 00:20:40.009 END TEST nvmf_initiator_timeout 00:20:40.009 ************************************ 00:20:40.009 02:37:18 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:40.009 02:37:18 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:40.009 02:37:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.009 02:37:18 -- common/autotest_common.sh@10 -- # set +x 00:20:40.009 02:37:19 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:40.009 02:37:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.009 02:37:19 -- common/autotest_common.sh@10 -- # set +x 00:20:40.009 02:37:19 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:40.009 02:37:19 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:40.009 02:37:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:40.009 02:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:40.009 02:37:19 -- common/autotest_common.sh@10 -- # set +x 00:20:40.009 ************************************ 00:20:40.009 START TEST nvmf_multicontroller 00:20:40.009 ************************************ 00:20:40.009 02:37:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:40.009 * Looking for test storage... 
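For reference, the nvmf_initiator_timeout run that just completed above reduces to the following target-side sequence; this is a condensed sketch rather than the literal test script, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and all values copied from the log:
# 64 MiB malloc bdev wrapped in a delay bdev (delay values in microseconds)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
# Export Delay0 over NVMe/TCP on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# While the 60 s fio verify job shown above (rw=write, bs=4096, iodepth=1, verify=crc32c-intel)
# runs against /dev/nvme0n1, the delay latencies are raised and later dropped back to 30,
# checking that the in-flight I/O rides out the long-latency window and fio still exits cleanly
scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000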
00:20:40.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:40.009 02:37:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:40.009 02:37:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:40.009 02:37:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:40.009 02:37:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:40.009 02:37:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:40.009 02:37:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:40.009 02:37:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:40.009 02:37:19 -- scripts/common.sh@335 -- # IFS=.-: 00:20:40.009 02:37:19 -- scripts/common.sh@335 -- # read -ra ver1 00:20:40.009 02:37:19 -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.009 02:37:19 -- scripts/common.sh@336 -- # read -ra ver2 00:20:40.009 02:37:19 -- scripts/common.sh@337 -- # local 'op=<' 00:20:40.009 02:37:19 -- scripts/common.sh@339 -- # ver1_l=2 00:20:40.009 02:37:19 -- scripts/common.sh@340 -- # ver2_l=1 00:20:40.009 02:37:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:40.009 02:37:19 -- scripts/common.sh@343 -- # case "$op" in 00:20:40.009 02:37:19 -- scripts/common.sh@344 -- # : 1 00:20:40.009 02:37:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:40.009 02:37:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.009 02:37:19 -- scripts/common.sh@364 -- # decimal 1 00:20:40.009 02:37:19 -- scripts/common.sh@352 -- # local d=1 00:20:40.009 02:37:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.009 02:37:19 -- scripts/common.sh@354 -- # echo 1 00:20:40.009 02:37:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:40.009 02:37:19 -- scripts/common.sh@365 -- # decimal 2 00:20:40.009 02:37:19 -- scripts/common.sh@352 -- # local d=2 00:20:40.009 02:37:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.009 02:37:19 -- scripts/common.sh@354 -- # echo 2 00:20:40.009 02:37:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:40.009 02:37:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:40.009 02:37:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:40.009 02:37:19 -- scripts/common.sh@367 -- # return 0 00:20:40.009 02:37:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.009 02:37:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:40.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.009 --rc genhtml_branch_coverage=1 00:20:40.009 --rc genhtml_function_coverage=1 00:20:40.009 --rc genhtml_legend=1 00:20:40.009 --rc geninfo_all_blocks=1 00:20:40.009 --rc geninfo_unexecuted_blocks=1 00:20:40.009 00:20:40.009 ' 00:20:40.009 02:37:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:40.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.009 --rc genhtml_branch_coverage=1 00:20:40.009 --rc genhtml_function_coverage=1 00:20:40.009 --rc genhtml_legend=1 00:20:40.009 --rc geninfo_all_blocks=1 00:20:40.009 --rc geninfo_unexecuted_blocks=1 00:20:40.009 00:20:40.009 ' 00:20:40.009 02:37:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:40.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.009 --rc genhtml_branch_coverage=1 00:20:40.009 --rc genhtml_function_coverage=1 00:20:40.009 --rc genhtml_legend=1 00:20:40.009 --rc geninfo_all_blocks=1 00:20:40.009 --rc geninfo_unexecuted_blocks=1 00:20:40.009 00:20:40.009 ' 00:20:40.009 
02:37:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:40.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.009 --rc genhtml_branch_coverage=1 00:20:40.009 --rc genhtml_function_coverage=1 00:20:40.009 --rc genhtml_legend=1 00:20:40.009 --rc geninfo_all_blocks=1 00:20:40.009 --rc geninfo_unexecuted_blocks=1 00:20:40.009 00:20:40.009 ' 00:20:40.009 02:37:19 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:40.009 02:37:19 -- nvmf/common.sh@7 -- # uname -s 00:20:40.009 02:37:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.009 02:37:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.009 02:37:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.009 02:37:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.009 02:37:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.009 02:37:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.009 02:37:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.009 02:37:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.009 02:37:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.009 02:37:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.009 02:37:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:40.009 02:37:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:40.009 02:37:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.009 02:37:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.009 02:37:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.009 02:37:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.009 02:37:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.009 02:37:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.009 02:37:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.009 02:37:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.009 02:37:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.009 02:37:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.009 02:37:19 -- paths/export.sh@5 -- # export PATH 00:20:40.009 02:37:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.009 02:37:19 -- nvmf/common.sh@46 -- # : 0 00:20:40.009 02:37:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:40.009 02:37:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:40.009 02:37:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:40.009 02:37:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.009 02:37:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.009 02:37:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:40.009 02:37:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:40.009 02:37:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:40.009 02:37:19 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:40.009 02:37:19 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.009 02:37:19 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:40.009 02:37:19 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:40.009 02:37:19 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.009 02:37:19 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:40.009 02:37:19 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:40.009 02:37:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:40.009 02:37:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.009 02:37:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:40.009 02:37:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:40.009 02:37:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:40.009 02:37:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.009 02:37:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.009 02:37:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.009 02:37:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:40.009 02:37:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:40.009 02:37:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:40.009 02:37:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:40.009 02:37:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:40.009 02:37:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:40.009 02:37:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.009 02:37:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:20:40.009 02:37:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:40.009 02:37:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:40.009 02:37:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.009 02:37:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.009 02:37:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.009 02:37:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.009 02:37:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.009 02:37:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.009 02:37:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.009 02:37:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.009 02:37:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:40.010 02:37:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:40.010 Cannot find device "nvmf_tgt_br" 00:20:40.010 02:37:19 -- nvmf/common.sh@154 -- # true 00:20:40.010 02:37:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.010 Cannot find device "nvmf_tgt_br2" 00:20:40.010 02:37:19 -- nvmf/common.sh@155 -- # true 00:20:40.010 02:37:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:40.010 02:37:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:40.010 Cannot find device "nvmf_tgt_br" 00:20:40.010 02:37:19 -- nvmf/common.sh@157 -- # true 00:20:40.010 02:37:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:40.010 Cannot find device "nvmf_tgt_br2" 00:20:40.010 02:37:19 -- nvmf/common.sh@158 -- # true 00:20:40.010 02:37:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:40.010 02:37:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:40.010 02:37:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.010 02:37:19 -- nvmf/common.sh@161 -- # true 00:20:40.010 02:37:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.010 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.010 02:37:19 -- nvmf/common.sh@162 -- # true 00:20:40.010 02:37:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.010 02:37:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.010 02:37:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.010 02:37:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.010 02:37:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.010 02:37:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.010 02:37:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.010 02:37:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.010 02:37:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.010 02:37:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:40.010 02:37:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:40.010 02:37:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:40.010 02:37:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:40.010 02:37:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.010 02:37:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.010 02:37:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.010 02:37:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:40.010 02:37:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:40.010 02:37:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.010 02:37:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.010 02:37:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.010 02:37:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.010 02:37:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.010 02:37:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:40.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:20:40.010 00:20:40.010 --- 10.0.0.2 ping statistics --- 00:20:40.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.010 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:20:40.010 02:37:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:40.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:20:40.010 00:20:40.010 --- 10.0.0.3 ping statistics --- 00:20:40.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.010 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:40.010 02:37:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:20:40.010 00:20:40.010 --- 10.0.0.1 ping statistics --- 00:20:40.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.010 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:20:40.010 02:37:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.010 02:37:19 -- nvmf/common.sh@421 -- # return 0 00:20:40.010 02:37:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.010 02:37:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.010 02:37:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:40.010 02:37:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:40.010 02:37:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.010 02:37:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:40.010 02:37:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:40.010 02:37:19 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:40.010 02:37:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:40.010 02:37:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.010 02:37:19 -- common/autotest_common.sh@10 -- # set +x 00:20:40.010 02:37:19 -- nvmf/common.sh@469 -- # nvmfpid=82088 00:20:40.010 02:37:19 -- nvmf/common.sh@470 -- # waitforlisten 82088 00:20:40.010 02:37:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:40.010 02:37:19 -- common/autotest_common.sh@829 -- # '[' -z 82088 ']' 00:20:40.010 02:37:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.010 02:37:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.010 02:37:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.010 02:37:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.010 02:37:19 -- common/autotest_common.sh@10 -- # set +x 00:20:40.010 [2024-11-21 02:37:19.700885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:40.010 [2024-11-21 02:37:19.700972] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.010 [2024-11-21 02:37:19.836412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:40.010 [2024-11-21 02:37:19.912497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:40.010 [2024-11-21 02:37:19.912652] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.010 [2024-11-21 02:37:19.912665] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.010 [2024-11-21 02:37:19.912674] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
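The nvmf_veth_init steps interleaved above build a small two-namespace test network before the target starts. Stripped of the retry/cleanup noise, and omitting the second target interface (10.0.0.3) and the individual link-up commands, the topology comes down to roughly the following (commands taken from the log):
# Target runs inside its own network namespace, reached over veth pairs and a bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
# nvmf_tgt is then launched inside the namespace, with reactors on cores 1-3 (-m 0xE)
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE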
00:20:40.010 [2024-11-21 02:37:19.912858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.010 [2024-11-21 02:37:19.913608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.010 [2024-11-21 02:37:19.913653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.269 02:37:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.269 02:37:20 -- common/autotest_common.sh@862 -- # return 0 00:20:40.269 02:37:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:40.269 02:37:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 02:37:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.269 02:37:20 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 [2024-11-21 02:37:20.786116] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 Malloc0 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 [2024-11-21 02:37:20.853324] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 [2024-11-21 02:37:20.861217] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 Malloc1 00:20:40.269 02:37:20 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.269 02:37:20 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:40.269 02:37:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.269 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:40.527 02:37:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.527 02:37:20 -- host/multicontroller.sh@44 -- # bdevperf_pid=82140 00:20:40.527 02:37:20 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.527 02:37:20 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:40.527 02:37:20 -- host/multicontroller.sh@47 -- # waitforlisten 82140 /var/tmp/bdevperf.sock 00:20:40.527 02:37:20 -- common/autotest_common.sh@829 -- # '[' -z 82140 ']' 00:20:40.527 02:37:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.527 02:37:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.527 02:37:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
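Before bdevperf comes up, the target has been populated with two subsystems that both listen on ports 4420 and 4421; a condensed sketch of the rpc_cmd calls shown above, again using scripts/rpc.py as the equivalent direct invocation:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
# bdevperf acts as the initiator-side application, driven over its own RPC socket
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f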
00:20:40.527 02:37:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.527 02:37:20 -- common/autotest_common.sh@10 -- # set +x 00:20:41.464 02:37:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.464 02:37:21 -- common/autotest_common.sh@862 -- # return 0 00:20:41.464 02:37:21 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:41.464 02:37:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.464 02:37:21 -- common/autotest_common.sh@10 -- # set +x 00:20:41.464 NVMe0n1 00:20:41.464 02:37:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.464 02:37:21 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:41.464 02:37:21 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:41.464 02:37:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.464 02:37:21 -- common/autotest_common.sh@10 -- # set +x 00:20:41.464 02:37:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.464 1 00:20:41.464 02:37:21 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:41.464 02:37:21 -- common/autotest_common.sh@650 -- # local es=0 00:20:41.464 02:37:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:41.464 02:37:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.464 02:37:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.464 02:37:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.464 02:37:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.464 02:37:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:41.464 02:37:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.464 02:37:21 -- common/autotest_common.sh@10 -- # set +x 00:20:41.464 2024/11/21 02:37:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:41.464 request: 00:20:41.464 { 00:20:41.464 "method": "bdev_nvme_attach_controller", 00:20:41.464 "params": { 00:20:41.464 "name": "NVMe0", 00:20:41.464 "trtype": "tcp", 00:20:41.464 "traddr": "10.0.0.2", 00:20:41.464 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:41.464 "hostaddr": "10.0.0.2", 00:20:41.464 "hostsvcid": "60000", 00:20:41.464 "adrfam": "ipv4", 00:20:41.464 "trsvcid": "4420", 00:20:41.464 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:41.464 } 00:20:41.464 } 00:20:41.464 Got JSON-RPC error response 00:20:41.464 GoRPCClient: error on JSON-RPC call 00:20:41.464 02:37:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:41.464 02:37:21 -- 
common/autotest_common.sh@653 -- # es=1 00:20:41.464 02:37:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.464 02:37:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.464 02:37:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.464 02:37:21 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:41.464 02:37:21 -- common/autotest_common.sh@650 -- # local es=0 00:20:41.464 02:37:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:41.464 02:37:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.464 02:37:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.464 02:37:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.464 02:37:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.464 02:37:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:41.464 02:37:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.464 02:37:21 -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 2024/11/21 02:37:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:41.465 request: 00:20:41.465 { 00:20:41.465 "method": "bdev_nvme_attach_controller", 00:20:41.465 "params": { 00:20:41.465 "name": "NVMe0", 00:20:41.465 "trtype": "tcp", 00:20:41.465 "traddr": "10.0.0.2", 00:20:41.465 "hostaddr": "10.0.0.2", 00:20:41.465 "hostsvcid": "60000", 00:20:41.465 "adrfam": "ipv4", 00:20:41.465 "trsvcid": "4420", 00:20:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:41.465 } 00:20:41.465 } 00:20:41.465 Got JSON-RPC error response 00:20:41.465 GoRPCClient: error on JSON-RPC call 00:20:41.465 02:37:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:41.465 02:37:21 -- common/autotest_common.sh@653 -- # es=1 00:20:41.465 02:37:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.465 02:37:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.465 02:37:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.465 02:37:21 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:41.465 02:37:21 -- common/autotest_common.sh@650 -- # local es=0 00:20:41.465 02:37:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:41.465 02:37:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.465 02:37:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.465 02:37:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.465 02:37:21 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.465 02:37:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:41.465 02:37:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.465 02:37:21 -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 2024/11/21 02:37:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:41.465 request: 00:20:41.465 { 00:20:41.465 "method": "bdev_nvme_attach_controller", 00:20:41.465 "params": { 00:20:41.465 "name": "NVMe0", 00:20:41.465 "trtype": "tcp", 00:20:41.465 "traddr": "10.0.0.2", 00:20:41.465 "hostaddr": "10.0.0.2", 00:20:41.465 "hostsvcid": "60000", 00:20:41.465 "adrfam": "ipv4", 00:20:41.465 "trsvcid": "4420", 00:20:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.465 "multipath": "disable" 00:20:41.465 } 00:20:41.465 } 00:20:41.465 Got JSON-RPC error response 00:20:41.465 GoRPCClient: error on JSON-RPC call 00:20:41.465 02:37:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:41.465 02:37:21 -- common/autotest_common.sh@653 -- # es=1 00:20:41.465 02:37:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.465 02:37:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.465 02:37:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.465 02:37:21 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:41.465 02:37:21 -- common/autotest_common.sh@650 -- # local es=0 00:20:41.465 02:37:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:41.465 02:37:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:41.465 02:37:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.465 02:37:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:41.465 02:37:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.465 02:37:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:41.465 02:37:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.465 02:37:21 -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 2024/11/21 02:37:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:41.465 request: 00:20:41.465 { 00:20:41.465 "method": "bdev_nvme_attach_controller", 00:20:41.465 "params": { 00:20:41.465 "name": "NVMe0", 
00:20:41.465 "trtype": "tcp", 00:20:41.465 "traddr": "10.0.0.2", 00:20:41.465 "hostaddr": "10.0.0.2", 00:20:41.465 "hostsvcid": "60000", 00:20:41.465 "adrfam": "ipv4", 00:20:41.465 "trsvcid": "4420", 00:20:41.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.465 "multipath": "failover" 00:20:41.465 } 00:20:41.465 } 00:20:41.465 Got JSON-RPC error response 00:20:41.465 GoRPCClient: error on JSON-RPC call 00:20:41.465 02:37:22 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:41.465 02:37:22 -- common/autotest_common.sh@653 -- # es=1 00:20:41.465 02:37:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.465 02:37:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.465 02:37:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.465 02:37:22 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:41.465 02:37:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.465 02:37:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 00:20:41.465 02:37:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.465 02:37:22 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:41.465 02:37:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.465 02:37:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 02:37:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.465 02:37:22 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:41.465 02:37:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.465 02:37:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.725 00:20:41.725 02:37:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.725 02:37:22 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:41.725 02:37:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.725 02:37:22 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:41.725 02:37:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.725 02:37:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.725 02:37:22 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:41.725 02:37:22 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:43.104 0 00:20:43.104 02:37:23 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:43.104 02:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.104 02:37:23 -- common/autotest_common.sh@10 -- # set +x 00:20:43.104 02:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.104 02:37:23 -- host/multicontroller.sh@100 -- # killprocess 82140 00:20:43.104 02:37:23 -- common/autotest_common.sh@936 -- # '[' -z 82140 ']' 00:20:43.104 02:37:23 -- common/autotest_common.sh@940 -- # kill -0 82140 00:20:43.104 02:37:23 -- common/autotest_common.sh@941 -- # uname 00:20:43.104 02:37:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:43.104 02:37:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82140 00:20:43.104 killing process with pid 82140 00:20:43.104 
02:37:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:43.104 02:37:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:43.104 02:37:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82140' 00:20:43.104 02:37:23 -- common/autotest_common.sh@955 -- # kill 82140 00:20:43.104 02:37:23 -- common/autotest_common.sh@960 -- # wait 82140 00:20:43.104 02:37:23 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:43.104 02:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.104 02:37:23 -- common/autotest_common.sh@10 -- # set +x 00:20:43.104 02:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.104 02:37:23 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:43.104 02:37:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.104 02:37:23 -- common/autotest_common.sh@10 -- # set +x 00:20:43.104 02:37:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.104 02:37:23 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:43.104 02:37:23 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:43.104 02:37:23 -- common/autotest_common.sh@1607 -- # read -r file 00:20:43.104 02:37:23 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:43.104 02:37:23 -- common/autotest_common.sh@1606 -- # sort -u 00:20:43.104 02:37:23 -- common/autotest_common.sh@1608 -- # cat 00:20:43.104 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:43.104 [2024-11-21 02:37:20.980196] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:43.104 [2024-11-21 02:37:20.980318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82140 ] 00:20:43.104 [2024-11-21 02:37:21.122243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.104 [2024-11-21 02:37:21.227178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.104 [2024-11-21 02:37:22.159399] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name dc1139e0-211b-4e62-b9e3-e15010643ae3 already exists 00:20:43.104 [2024-11-21 02:37:22.159455] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:dc1139e0-211b-4e62-b9e3-e15010643ae3 alias for bdev NVMe1n1 00:20:43.104 [2024-11-21 02:37:22.159479] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:43.104 Running I/O for 1 seconds... 
00:20:43.104 00:20:43.104 Latency(us) 00:20:43.104 [2024-11-21T02:37:23.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.104 [2024-11-21T02:37:23.751Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:43.104 NVMe0n1 : 1.00 23987.59 93.70 0.00 0.00 5329.38 2993.80 11081.54 00:20:43.104 [2024-11-21T02:37:23.751Z] =================================================================================================================== 00:20:43.104 [2024-11-21T02:37:23.751Z] Total : 23987.59 93.70 0.00 0.00 5329.38 2993.80 11081.54 00:20:43.104 Received shutdown signal, test time was about 1.000000 seconds 00:20:43.104 00:20:43.104 Latency(us) 00:20:43.104 [2024-11-21T02:37:23.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.104 [2024-11-21T02:37:23.751Z] =================================================================================================================== 00:20:43.104 [2024-11-21T02:37:23.751Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.104 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:43.104 02:37:23 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:43.104 02:37:23 -- common/autotest_common.sh@1607 -- # read -r file 00:20:43.104 02:37:23 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:43.104 02:37:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:43.104 02:37:23 -- nvmf/common.sh@116 -- # sync 00:20:43.363 02:37:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:43.363 02:37:23 -- nvmf/common.sh@119 -- # set +e 00:20:43.363 02:37:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:43.363 02:37:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:43.363 rmmod nvme_tcp 00:20:43.363 rmmod nvme_fabrics 00:20:43.363 rmmod nvme_keyring 00:20:43.363 02:37:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:43.363 02:37:23 -- nvmf/common.sh@123 -- # set -e 00:20:43.363 02:37:23 -- nvmf/common.sh@124 -- # return 0 00:20:43.363 02:37:23 -- nvmf/common.sh@477 -- # '[' -n 82088 ']' 00:20:43.363 02:37:23 -- nvmf/common.sh@478 -- # killprocess 82088 00:20:43.363 02:37:23 -- common/autotest_common.sh@936 -- # '[' -z 82088 ']' 00:20:43.363 02:37:23 -- common/autotest_common.sh@940 -- # kill -0 82088 00:20:43.363 02:37:23 -- common/autotest_common.sh@941 -- # uname 00:20:43.363 02:37:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:43.363 02:37:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82088 00:20:43.363 killing process with pid 82088 00:20:43.363 02:37:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:43.363 02:37:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:43.363 02:37:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82088' 00:20:43.363 02:37:23 -- common/autotest_common.sh@955 -- # kill 82088 00:20:43.363 02:37:23 -- common/autotest_common.sh@960 -- # wait 82088 00:20:43.623 02:37:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:43.623 02:37:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:43.623 02:37:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:43.623 02:37:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.623 02:37:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:43.623 02:37:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.623 02:37:24 -- common/autotest_common.sh@22 -- # eval 
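The attach/detach checks above all go through the bdevperf RPC socket; condensed from this run (a sketch of the observed pattern, not the literal multicontroller.sh logic), the expected behaviour is:
rpc='scripts/rpc.py -s /var/tmp/bdevperf.sock'
# First attach succeeds and exposes NVMe0n1
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Re-attaching the same controller name with a different hostnqn, with a different
# subsystem NQN, or with -x disable / -x failover on the same listener is rejected
# with Code=-114 "A controller named NVMe0 already exists ..." (the four errors above)
# Attaching the same name on the other listener (a new path) succeeds, as does a second name:
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$rpc bdev_nvme_get_controllers | grep -c NVMe    # expects 2 (NVMe0 and NVMe1)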
'_remove_spdk_ns 14> /dev/null' 00:20:43.623 02:37:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.623 02:37:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:43.623 00:20:43.623 real 0m5.134s 00:20:43.623 user 0m15.689s 00:20:43.623 sys 0m1.188s 00:20:43.623 ************************************ 00:20:43.623 END TEST nvmf_multicontroller 00:20:43.623 ************************************ 00:20:43.623 02:37:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:43.623 02:37:24 -- common/autotest_common.sh@10 -- # set +x 00:20:43.623 02:37:24 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:43.623 02:37:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:43.623 02:37:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:43.623 02:37:24 -- common/autotest_common.sh@10 -- # set +x 00:20:43.623 ************************************ 00:20:43.623 START TEST nvmf_aer 00:20:43.623 ************************************ 00:20:43.623 02:37:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:43.884 * Looking for test storage... 00:20:43.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:43.884 02:37:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:43.884 02:37:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:43.884 02:37:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:43.884 02:37:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:43.884 02:37:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:43.884 02:37:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:43.884 02:37:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:43.884 02:37:24 -- scripts/common.sh@335 -- # IFS=.-: 00:20:43.884 02:37:24 -- scripts/common.sh@335 -- # read -ra ver1 00:20:43.884 02:37:24 -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.884 02:37:24 -- scripts/common.sh@336 -- # read -ra ver2 00:20:43.884 02:37:24 -- scripts/common.sh@337 -- # local 'op=<' 00:20:43.884 02:37:24 -- scripts/common.sh@339 -- # ver1_l=2 00:20:43.884 02:37:24 -- scripts/common.sh@340 -- # ver2_l=1 00:20:43.884 02:37:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:43.884 02:37:24 -- scripts/common.sh@343 -- # case "$op" in 00:20:43.884 02:37:24 -- scripts/common.sh@344 -- # : 1 00:20:43.884 02:37:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:43.884 02:37:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.884 02:37:24 -- scripts/common.sh@364 -- # decimal 1 00:20:43.884 02:37:24 -- scripts/common.sh@352 -- # local d=1 00:20:43.884 02:37:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.884 02:37:24 -- scripts/common.sh@354 -- # echo 1 00:20:43.884 02:37:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:43.884 02:37:24 -- scripts/common.sh@365 -- # decimal 2 00:20:43.884 02:37:24 -- scripts/common.sh@352 -- # local d=2 00:20:43.884 02:37:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.884 02:37:24 -- scripts/common.sh@354 -- # echo 2 00:20:43.884 02:37:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:43.884 02:37:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:43.884 02:37:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:43.884 02:37:24 -- scripts/common.sh@367 -- # return 0 00:20:43.884 02:37:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.884 02:37:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.884 --rc genhtml_branch_coverage=1 00:20:43.884 --rc genhtml_function_coverage=1 00:20:43.884 --rc genhtml_legend=1 00:20:43.884 --rc geninfo_all_blocks=1 00:20:43.884 --rc geninfo_unexecuted_blocks=1 00:20:43.884 00:20:43.884 ' 00:20:43.884 02:37:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.884 --rc genhtml_branch_coverage=1 00:20:43.884 --rc genhtml_function_coverage=1 00:20:43.884 --rc genhtml_legend=1 00:20:43.884 --rc geninfo_all_blocks=1 00:20:43.884 --rc geninfo_unexecuted_blocks=1 00:20:43.884 00:20:43.884 ' 00:20:43.884 02:37:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:43.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.884 --rc genhtml_branch_coverage=1 00:20:43.884 --rc genhtml_function_coverage=1 00:20:43.884 --rc genhtml_legend=1 00:20:43.884 --rc geninfo_all_blocks=1 00:20:43.884 --rc geninfo_unexecuted_blocks=1 00:20:43.884 00:20:43.884 ' 00:20:43.885 02:37:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:43.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.885 --rc genhtml_branch_coverage=1 00:20:43.885 --rc genhtml_function_coverage=1 00:20:43.885 --rc genhtml_legend=1 00:20:43.885 --rc geninfo_all_blocks=1 00:20:43.885 --rc geninfo_unexecuted_blocks=1 00:20:43.885 00:20:43.885 ' 00:20:43.885 02:37:24 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.885 02:37:24 -- nvmf/common.sh@7 -- # uname -s 00:20:43.885 02:37:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.885 02:37:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.885 02:37:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.885 02:37:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.885 02:37:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.885 02:37:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.885 02:37:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.885 02:37:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.885 02:37:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.885 02:37:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.885 02:37:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:43.885 
02:37:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:43.885 02:37:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.885 02:37:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.885 02:37:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.885 02:37:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.885 02:37:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.885 02:37:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.885 02:37:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.885 02:37:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.885 02:37:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.885 02:37:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.885 02:37:24 -- paths/export.sh@5 -- # export PATH 00:20:43.885 02:37:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.885 02:37:24 -- nvmf/common.sh@46 -- # : 0 00:20:43.885 02:37:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:43.885 02:37:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:43.885 02:37:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:43.885 02:37:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.885 02:37:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.885 02:37:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
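For context: the hostnqn/hostid pair generated just above populates the NVME_HOST array that nvmf/common.sh hands to the kernel initiator in tests that use it (this particular run drives the SPDK initiator instead). Used directly, it would look roughly like the following — an illustrative sketch only, with the subsystem NQN borrowed from the subsystem created later in this log:

  # kernel NVMe/TCP initiator using the generated host identity (not exercised in this run)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b \
      --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b
  nvme list                                         # namespace should appear as /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1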
00:20:43.885 02:37:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:43.885 02:37:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:43.885 02:37:24 -- host/aer.sh@11 -- # nvmftestinit 00:20:43.885 02:37:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:43.885 02:37:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.885 02:37:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:43.885 02:37:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:43.885 02:37:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:43.885 02:37:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.885 02:37:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.885 02:37:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.885 02:37:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:43.885 02:37:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:43.885 02:37:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:43.885 02:37:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:43.885 02:37:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:43.885 02:37:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:43.885 02:37:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.885 02:37:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.885 02:37:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:43.885 02:37:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:43.885 02:37:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.885 02:37:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.885 02:37:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.885 02:37:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.885 02:37:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.885 02:37:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.885 02:37:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.885 02:37:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.885 02:37:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:43.885 02:37:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:43.885 Cannot find device "nvmf_tgt_br" 00:20:43.885 02:37:24 -- nvmf/common.sh@154 -- # true 00:20:43.885 02:37:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.885 Cannot find device "nvmf_tgt_br2" 00:20:43.885 02:37:24 -- nvmf/common.sh@155 -- # true 00:20:43.885 02:37:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:43.885 02:37:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:43.885 Cannot find device "nvmf_tgt_br" 00:20:43.885 02:37:24 -- nvmf/common.sh@157 -- # true 00:20:43.885 02:37:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:43.885 Cannot find device "nvmf_tgt_br2" 00:20:43.885 02:37:24 -- nvmf/common.sh@158 -- # true 00:20:43.885 02:37:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:44.144 02:37:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:44.144 02:37:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:44.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.144 02:37:24 -- nvmf/common.sh@161 -- # true 00:20:44.144 02:37:24 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:44.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.144 02:37:24 -- nvmf/common.sh@162 -- # true 00:20:44.144 02:37:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:44.144 02:37:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:44.144 02:37:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:44.144 02:37:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:44.144 02:37:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:44.144 02:37:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:44.144 02:37:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:44.144 02:37:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:44.144 02:37:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:44.144 02:37:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:44.144 02:37:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:44.144 02:37:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:44.144 02:37:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:44.144 02:37:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:44.144 02:37:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:44.144 02:37:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:44.144 02:37:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:44.144 02:37:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:44.144 02:37:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.144 02:37:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:44.144 02:37:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.144 02:37:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.144 02:37:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.144 02:37:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:44.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:20:44.144 00:20:44.144 --- 10.0.0.2 ping statistics --- 00:20:44.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.144 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:44.144 02:37:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:44.145 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:44.145 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:20:44.145 00:20:44.145 --- 10.0.0.3 ping statistics --- 00:20:44.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.145 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:44.145 02:37:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:20:44.145 00:20:44.145 --- 10.0.0.1 ping statistics --- 00:20:44.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.145 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:44.145 02:37:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.145 02:37:24 -- nvmf/common.sh@421 -- # return 0 00:20:44.145 02:37:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:44.145 02:37:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.145 02:37:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:44.145 02:37:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:44.145 02:37:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.145 02:37:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:44.145 02:37:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:44.145 02:37:24 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:44.145 02:37:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:44.145 02:37:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:44.145 02:37:24 -- common/autotest_common.sh@10 -- # set +x 00:20:44.145 02:37:24 -- nvmf/common.sh@469 -- # nvmfpid=82396 00:20:44.145 02:37:24 -- nvmf/common.sh@470 -- # waitforlisten 82396 00:20:44.145 02:37:24 -- common/autotest_common.sh@829 -- # '[' -z 82396 ']' 00:20:44.145 02:37:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:44.145 02:37:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.145 02:37:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.145 02:37:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.145 02:37:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.145 02:37:24 -- common/autotest_common.sh@10 -- # set +x 00:20:44.403 [2024-11-21 02:37:24.846658] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:44.403 [2024-11-21 02:37:24.847256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.403 [2024-11-21 02:37:24.986872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.662 [2024-11-21 02:37:25.099876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:44.662 [2024-11-21 02:37:25.100072] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.662 [2024-11-21 02:37:25.100090] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.662 [2024-11-21 02:37:25.100102] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
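The virtual topology that the single-packet pings above just verified boils down to the following, condensed from the ip/iptables commands in the trace (interface and namespace names as used by nvmf_veth_init):

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator: 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target:    10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target:    10.0.0.3/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip link add nvmf_br type bridge                              # bridges the *_br veth peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With all three addresses answering, nvmf_tgt is started inside the namespace on all four cores (-m 0xF) with full tracing enabled (-e 0xFFFF), as shown in the EAL/reactor lines above.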
00:20:44.662 [2024-11-21 02:37:25.100241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.662 [2024-11-21 02:37:25.100968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.662 [2024-11-21 02:37:25.101050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.662 [2024-11-21 02:37:25.101060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.229 02:37:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.229 02:37:25 -- common/autotest_common.sh@862 -- # return 0 00:20:45.229 02:37:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:45.229 02:37:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:45.229 02:37:25 -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 02:37:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.488 02:37:25 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.488 02:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.488 02:37:25 -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 [2024-11-21 02:37:25.922071] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.488 02:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.488 02:37:25 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:45.488 02:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.488 02:37:25 -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 Malloc0 00:20:45.488 02:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.488 02:37:25 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:45.488 02:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.488 02:37:25 -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 02:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.488 02:37:25 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:45.488 02:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.488 02:37:25 -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 02:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.488 02:37:25 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.488 02:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.488 02:37:25 -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 [2024-11-21 02:37:25.994454] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.488 02:37:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.488 02:37:25 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:45.488 02:37:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.488 02:37:25 -- common/autotest_common.sh@10 -- # set +x 00:20:45.488 [2024-11-21 02:37:26.002128] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:45.488 [ 00:20:45.488 { 00:20:45.488 "allow_any_host": true, 00:20:45.488 "hosts": [], 00:20:45.488 "listen_addresses": [], 00:20:45.488 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:45.488 "subtype": "Discovery" 00:20:45.488 }, 00:20:45.488 { 00:20:45.488 "allow_any_host": true, 00:20:45.488 "hosts": 
[], 00:20:45.488 "listen_addresses": [ 00:20:45.488 { 00:20:45.488 "adrfam": "IPv4", 00:20:45.489 "traddr": "10.0.0.2", 00:20:45.489 "transport": "TCP", 00:20:45.489 "trsvcid": "4420", 00:20:45.489 "trtype": "TCP" 00:20:45.489 } 00:20:45.489 ], 00:20:45.489 "max_cntlid": 65519, 00:20:45.489 "max_namespaces": 2, 00:20:45.489 "min_cntlid": 1, 00:20:45.489 "model_number": "SPDK bdev Controller", 00:20:45.489 "namespaces": [ 00:20:45.489 { 00:20:45.489 "bdev_name": "Malloc0", 00:20:45.489 "name": "Malloc0", 00:20:45.489 "nguid": "1935B86D0BA7442980192B42673AD2EA", 00:20:45.489 "nsid": 1, 00:20:45.489 "uuid": "1935b86d-0ba7-4429-8019-2b42673ad2ea" 00:20:45.489 } 00:20:45.489 ], 00:20:45.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.489 "serial_number": "SPDK00000000000001", 00:20:45.489 "subtype": "NVMe" 00:20:45.489 } 00:20:45.489 ] 00:20:45.489 02:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.489 02:37:26 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:45.489 02:37:26 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:45.489 02:37:26 -- host/aer.sh@33 -- # aerpid=82450 00:20:45.489 02:37:26 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:45.489 02:37:26 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:45.489 02:37:26 -- common/autotest_common.sh@1254 -- # local i=0 00:20:45.489 02:37:26 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:45.489 02:37:26 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:45.489 02:37:26 -- common/autotest_common.sh@1257 -- # i=1 00:20:45.489 02:37:26 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:45.489 02:37:26 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:45.489 02:37:26 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:45.489 02:37:26 -- common/autotest_common.sh@1257 -- # i=2 00:20:45.489 02:37:26 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:45.748 02:37:26 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:45.748 02:37:26 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:45.748 02:37:26 -- common/autotest_common.sh@1265 -- # return 0 00:20:45.748 02:37:26 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:45.748 02:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.748 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:20:45.748 Malloc1 00:20:45.748 02:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.748 02:37:26 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:45.748 02:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.748 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:20:45.748 02:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.748 02:37:26 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:45.748 02:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.748 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:20:45.748 Asynchronous Event Request test 00:20:45.748 Attaching to 10.0.0.2 00:20:45.748 Attached to 10.0.0.2 00:20:45.748 Registering asynchronous event callbacks... 00:20:45.748 Starting namespace attribute notice tests for all controllers... 
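The rpc_cmd calls above are the autotest wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Run by hand, the sequence that provokes the namespace-attribute AER would look roughly like this (sketch; arguments copied from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the aer tool is already connected and waiting on the touch file; adding a
  # second namespace changes the namespace list and fires the AER it listens for
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2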
00:20:45.748 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:45.748 aer_cb - Changed Namespace 00:20:45.748 Cleaning up... 00:20:45.748 [ 00:20:45.748 { 00:20:45.748 "allow_any_host": true, 00:20:45.748 "hosts": [], 00:20:45.748 "listen_addresses": [], 00:20:45.748 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:45.748 "subtype": "Discovery" 00:20:45.748 }, 00:20:45.748 { 00:20:45.748 "allow_any_host": true, 00:20:45.748 "hosts": [], 00:20:45.748 "listen_addresses": [ 00:20:45.748 { 00:20:45.748 "adrfam": "IPv4", 00:20:45.748 "traddr": "10.0.0.2", 00:20:45.748 "transport": "TCP", 00:20:45.748 "trsvcid": "4420", 00:20:45.748 "trtype": "TCP" 00:20:45.748 } 00:20:45.748 ], 00:20:45.748 "max_cntlid": 65519, 00:20:45.748 "max_namespaces": 2, 00:20:45.748 "min_cntlid": 1, 00:20:45.748 "model_number": "SPDK bdev Controller", 00:20:45.748 "namespaces": [ 00:20:45.748 { 00:20:45.748 "bdev_name": "Malloc0", 00:20:45.748 "name": "Malloc0", 00:20:45.748 "nguid": "1935B86D0BA7442980192B42673AD2EA", 00:20:45.748 "nsid": 1, 00:20:45.748 "uuid": "1935b86d-0ba7-4429-8019-2b42673ad2ea" 00:20:45.748 }, 00:20:45.748 { 00:20:45.748 "bdev_name": "Malloc1", 00:20:45.748 "name": "Malloc1", 00:20:45.748 "nguid": "0A6DFB8FDCCE42A499F0B50B8595AE61", 00:20:45.748 "nsid": 2, 00:20:45.748 "uuid": "0a6dfb8f-dcce-42a4-99f0-b50b8595ae61" 00:20:45.748 } 00:20:45.748 ], 00:20:45.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.748 "serial_number": "SPDK00000000000001", 00:20:45.748 "subtype": "NVMe" 00:20:45.748 } 00:20:45.748 ] 00:20:45.748 02:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.748 02:37:26 -- host/aer.sh@43 -- # wait 82450 00:20:45.748 02:37:26 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:45.748 02:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.748 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:20:45.748 02:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.748 02:37:26 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:45.748 02:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.748 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:20:46.007 02:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.007 02:37:26 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.007 02:37:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.007 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:20:46.007 02:37:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.007 02:37:26 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:46.007 02:37:26 -- host/aer.sh@51 -- # nvmftestfini 00:20:46.007 02:37:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:46.007 02:37:26 -- nvmf/common.sh@116 -- # sync 00:20:46.007 02:37:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:46.007 02:37:26 -- nvmf/common.sh@119 -- # set +e 00:20:46.007 02:37:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:46.007 02:37:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:46.007 rmmod nvme_tcp 00:20:46.007 rmmod nvme_fabrics 00:20:46.007 rmmod nvme_keyring 00:20:46.007 02:37:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:46.007 02:37:26 -- nvmf/common.sh@123 -- # set -e 00:20:46.007 02:37:26 -- nvmf/common.sh@124 -- # return 0 00:20:46.007 02:37:26 -- nvmf/common.sh@477 -- # '[' -n 82396 ']' 00:20:46.007 02:37:26 -- nvmf/common.sh@478 -- # killprocess 82396 00:20:46.007 02:37:26 -- 
common/autotest_common.sh@936 -- # '[' -z 82396 ']' 00:20:46.007 02:37:26 -- common/autotest_common.sh@940 -- # kill -0 82396 00:20:46.007 02:37:26 -- common/autotest_common.sh@941 -- # uname 00:20:46.007 02:37:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:46.007 02:37:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82396 00:20:46.007 02:37:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:46.007 02:37:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:46.007 02:37:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82396' 00:20:46.007 killing process with pid 82396 00:20:46.007 02:37:26 -- common/autotest_common.sh@955 -- # kill 82396 00:20:46.007 [2024-11-21 02:37:26.555622] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:46.007 02:37:26 -- common/autotest_common.sh@960 -- # wait 82396 00:20:46.266 02:37:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:46.266 02:37:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:46.266 02:37:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:46.266 02:37:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.266 02:37:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:46.266 02:37:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.266 02:37:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.266 02:37:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.266 02:37:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:46.266 00:20:46.266 real 0m2.661s 00:20:46.266 user 0m7.136s 00:20:46.266 sys 0m0.772s 00:20:46.266 02:37:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:46.266 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:20:46.266 ************************************ 00:20:46.266 END TEST nvmf_aer 00:20:46.266 ************************************ 00:20:46.525 02:37:26 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:46.525 02:37:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:46.525 02:37:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:46.525 02:37:26 -- common/autotest_common.sh@10 -- # set +x 00:20:46.525 ************************************ 00:20:46.525 START TEST nvmf_async_init 00:20:46.525 ************************************ 00:20:46.525 02:37:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:46.525 * Looking for test storage... 
00:20:46.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.525 02:37:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:46.525 02:37:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:46.525 02:37:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:46.525 02:37:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:46.525 02:37:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:46.525 02:37:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:46.525 02:37:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:46.525 02:37:27 -- scripts/common.sh@335 -- # IFS=.-: 00:20:46.525 02:37:27 -- scripts/common.sh@335 -- # read -ra ver1 00:20:46.525 02:37:27 -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.525 02:37:27 -- scripts/common.sh@336 -- # read -ra ver2 00:20:46.525 02:37:27 -- scripts/common.sh@337 -- # local 'op=<' 00:20:46.525 02:37:27 -- scripts/common.sh@339 -- # ver1_l=2 00:20:46.525 02:37:27 -- scripts/common.sh@340 -- # ver2_l=1 00:20:46.525 02:37:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:46.525 02:37:27 -- scripts/common.sh@343 -- # case "$op" in 00:20:46.525 02:37:27 -- scripts/common.sh@344 -- # : 1 00:20:46.525 02:37:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:46.525 02:37:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:46.525 02:37:27 -- scripts/common.sh@364 -- # decimal 1 00:20:46.525 02:37:27 -- scripts/common.sh@352 -- # local d=1 00:20:46.525 02:37:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.525 02:37:27 -- scripts/common.sh@354 -- # echo 1 00:20:46.525 02:37:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:46.525 02:37:27 -- scripts/common.sh@365 -- # decimal 2 00:20:46.525 02:37:27 -- scripts/common.sh@352 -- # local d=2 00:20:46.525 02:37:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.525 02:37:27 -- scripts/common.sh@354 -- # echo 2 00:20:46.525 02:37:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:46.525 02:37:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:46.525 02:37:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:46.525 02:37:27 -- scripts/common.sh@367 -- # return 0 00:20:46.525 02:37:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.525 02:37:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:46.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.525 --rc genhtml_branch_coverage=1 00:20:46.525 --rc genhtml_function_coverage=1 00:20:46.525 --rc genhtml_legend=1 00:20:46.525 --rc geninfo_all_blocks=1 00:20:46.525 --rc geninfo_unexecuted_blocks=1 00:20:46.525 00:20:46.525 ' 00:20:46.525 02:37:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:46.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.525 --rc genhtml_branch_coverage=1 00:20:46.525 --rc genhtml_function_coverage=1 00:20:46.525 --rc genhtml_legend=1 00:20:46.525 --rc geninfo_all_blocks=1 00:20:46.525 --rc geninfo_unexecuted_blocks=1 00:20:46.525 00:20:46.525 ' 00:20:46.525 02:37:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:46.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.525 --rc genhtml_branch_coverage=1 00:20:46.525 --rc genhtml_function_coverage=1 00:20:46.525 --rc genhtml_legend=1 00:20:46.525 --rc geninfo_all_blocks=1 00:20:46.525 --rc geninfo_unexecuted_blocks=1 00:20:46.525 00:20:46.525 ' 00:20:46.525 
02:37:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:46.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.525 --rc genhtml_branch_coverage=1 00:20:46.525 --rc genhtml_function_coverage=1 00:20:46.525 --rc genhtml_legend=1 00:20:46.525 --rc geninfo_all_blocks=1 00:20:46.525 --rc geninfo_unexecuted_blocks=1 00:20:46.525 00:20:46.525 ' 00:20:46.525 02:37:27 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.525 02:37:27 -- nvmf/common.sh@7 -- # uname -s 00:20:46.525 02:37:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.525 02:37:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.525 02:37:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.525 02:37:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.525 02:37:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.525 02:37:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.525 02:37:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.525 02:37:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.525 02:37:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.525 02:37:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.525 02:37:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:46.525 02:37:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:46.525 02:37:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.525 02:37:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.525 02:37:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.525 02:37:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.525 02:37:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.525 02:37:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.525 02:37:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.526 02:37:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.526 02:37:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.526 02:37:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.526 02:37:27 -- paths/export.sh@5 -- # export PATH 00:20:46.526 02:37:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.526 02:37:27 -- nvmf/common.sh@46 -- # : 0 00:20:46.526 02:37:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:46.526 02:37:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:46.526 02:37:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:46.526 02:37:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.526 02:37:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.526 02:37:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:46.526 02:37:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:46.526 02:37:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:46.526 02:37:27 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:46.526 02:37:27 -- host/async_init.sh@14 -- # null_block_size=512 00:20:46.526 02:37:27 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:46.526 02:37:27 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:46.526 02:37:27 -- host/async_init.sh@20 -- # uuidgen 00:20:46.526 02:37:27 -- host/async_init.sh@20 -- # tr -d - 00:20:46.526 02:37:27 -- host/async_init.sh@20 -- # nguid=6308bc5f7e304e858e37f9f47056a368 00:20:46.526 02:37:27 -- host/async_init.sh@22 -- # nvmftestinit 00:20:46.526 02:37:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:46.526 02:37:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.526 02:37:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:46.526 02:37:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:46.526 02:37:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:46.526 02:37:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.526 02:37:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.526 02:37:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.526 02:37:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:46.526 02:37:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:46.526 02:37:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:46.526 02:37:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:46.526 02:37:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:46.526 02:37:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:46.526 02:37:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.526 02:37:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.526 02:37:27 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.526 02:37:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:46.526 02:37:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:46.526 02:37:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.526 02:37:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.526 02:37:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.526 02:37:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.526 02:37:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.526 02:37:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.526 02:37:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.526 02:37:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:46.785 02:37:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:46.785 Cannot find device "nvmf_tgt_br" 00:20:46.785 02:37:27 -- nvmf/common.sh@154 -- # true 00:20:46.785 02:37:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.785 Cannot find device "nvmf_tgt_br2" 00:20:46.785 02:37:27 -- nvmf/common.sh@155 -- # true 00:20:46.785 02:37:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:46.785 02:37:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:46.785 Cannot find device "nvmf_tgt_br" 00:20:46.785 02:37:27 -- nvmf/common.sh@157 -- # true 00:20:46.785 02:37:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:46.785 Cannot find device "nvmf_tgt_br2" 00:20:46.785 02:37:27 -- nvmf/common.sh@158 -- # true 00:20:46.785 02:37:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:46.785 02:37:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:46.785 02:37:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.785 02:37:27 -- nvmf/common.sh@161 -- # true 00:20:46.785 02:37:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.785 02:37:27 -- nvmf/common.sh@162 -- # true 00:20:46.785 02:37:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.785 02:37:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.785 02:37:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.785 02:37:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.785 02:37:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.785 02:37:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.786 02:37:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.786 02:37:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.786 02:37:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.786 02:37:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:46.786 02:37:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:46.786 02:37:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:46.786 02:37:27 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:46.786 02:37:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:47.043 02:37:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:47.043 02:37:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:47.043 02:37:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:47.043 02:37:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:47.043 02:37:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:47.043 02:37:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:47.043 02:37:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:47.043 02:37:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:47.043 02:37:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:47.043 02:37:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:47.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:20:47.043 00:20:47.043 --- 10.0.0.2 ping statistics --- 00:20:47.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.044 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:47.044 02:37:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:47.044 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:47.044 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:47.044 00:20:47.044 --- 10.0.0.3 ping statistics --- 00:20:47.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.044 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:47.044 02:37:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:47.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:47.044 00:20:47.044 --- 10.0.0.1 ping statistics --- 00:20:47.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.044 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:47.044 02:37:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.044 02:37:27 -- nvmf/common.sh@421 -- # return 0 00:20:47.044 02:37:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:47.044 02:37:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.044 02:37:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:47.044 02:37:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:47.044 02:37:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.044 02:37:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:47.044 02:37:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:47.044 02:37:27 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:47.044 02:37:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:47.044 02:37:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:47.044 02:37:27 -- common/autotest_common.sh@10 -- # set +x 00:20:47.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:47.044 02:37:27 -- nvmf/common.sh@469 -- # nvmfpid=82633 00:20:47.044 02:37:27 -- nvmf/common.sh@470 -- # waitforlisten 82633 00:20:47.044 02:37:27 -- common/autotest_common.sh@829 -- # '[' -z 82633 ']' 00:20:47.044 02:37:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.044 02:37:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:47.044 02:37:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.044 02:37:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.044 02:37:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.044 02:37:27 -- common/autotest_common.sh@10 -- # set +x 00:20:47.044 [2024-11-21 02:37:27.591064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:47.044 [2024-11-21 02:37:27.591159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.301 [2024-11-21 02:37:27.730785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.301 [2024-11-21 02:37:27.816684] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:47.301 [2024-11-21 02:37:27.817385] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.302 [2024-11-21 02:37:27.817470] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.302 [2024-11-21 02:37:27.817540] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
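async_init parameterizes a 1024 MiB null bdev with a 512-byte block size and a fixed NGUID (the uuidgen output above with the dashes stripped). Once the single-core target is up, the bring-up that follows in the trace is equivalent to this hand-run sequence (sketch; values from the trace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6308bc5f7e304e858e37f9f47056a368
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420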
00:20:47.302 [2024-11-21 02:37:27.817615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.237 02:37:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.237 02:37:28 -- common/autotest_common.sh@862 -- # return 0 00:20:48.237 02:37:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:48.237 02:37:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 02:37:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.237 02:37:28 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:48.237 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 [2024-11-21 02:37:28.569351] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.237 02:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.237 02:37:28 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:48.237 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 null0 00:20:48.237 02:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.237 02:37:28 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:48.237 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 02:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.237 02:37:28 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:48.237 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 02:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.237 02:37:28 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6308bc5f7e304e858e37f9f47056a368 00:20:48.237 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 02:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.237 02:37:28 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:48.237 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 [2024-11-21 02:37:28.617476] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.237 02:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.237 02:37:28 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:48.237 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 nvme0n1 00:20:48.237 02:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.237 02:37:28 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:48.237 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.237 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.237 [ 00:20:48.237 { 00:20:48.237 "aliases": [ 00:20:48.237 "6308bc5f-7e30-4e85-8e37-f9f47056a368" 
00:20:48.237 ], 00:20:48.237 "assigned_rate_limits": { 00:20:48.237 "r_mbytes_per_sec": 0, 00:20:48.237 "rw_ios_per_sec": 0, 00:20:48.237 "rw_mbytes_per_sec": 0, 00:20:48.237 "w_mbytes_per_sec": 0 00:20:48.237 }, 00:20:48.237 "block_size": 512, 00:20:48.237 "claimed": false, 00:20:48.237 "driver_specific": { 00:20:48.237 "mp_policy": "active_passive", 00:20:48.237 "nvme": [ 00:20:48.237 { 00:20:48.237 "ctrlr_data": { 00:20:48.237 "ana_reporting": false, 00:20:48.237 "cntlid": 1, 00:20:48.238 "firmware_revision": "24.01.1", 00:20:48.238 "model_number": "SPDK bdev Controller", 00:20:48.238 "multi_ctrlr": true, 00:20:48.238 "oacs": { 00:20:48.238 "firmware": 0, 00:20:48.238 "format": 0, 00:20:48.238 "ns_manage": 0, 00:20:48.238 "security": 0 00:20:48.238 }, 00:20:48.238 "serial_number": "00000000000000000000", 00:20:48.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.238 "vendor_id": "0x8086" 00:20:48.238 }, 00:20:48.238 "ns_data": { 00:20:48.238 "can_share": true, 00:20:48.238 "id": 1 00:20:48.238 }, 00:20:48.238 "trid": { 00:20:48.238 "adrfam": "IPv4", 00:20:48.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.238 "traddr": "10.0.0.2", 00:20:48.238 "trsvcid": "4420", 00:20:48.238 "trtype": "TCP" 00:20:48.238 }, 00:20:48.238 "vs": { 00:20:48.238 "nvme_version": "1.3" 00:20:48.238 } 00:20:48.238 } 00:20:48.238 ] 00:20:48.238 }, 00:20:48.238 "name": "nvme0n1", 00:20:48.238 "num_blocks": 2097152, 00:20:48.238 "product_name": "NVMe disk", 00:20:48.238 "supported_io_types": { 00:20:48.238 "abort": true, 00:20:48.238 "compare": true, 00:20:48.238 "compare_and_write": true, 00:20:48.238 "flush": true, 00:20:48.238 "nvme_admin": true, 00:20:48.238 "nvme_io": true, 00:20:48.238 "read": true, 00:20:48.238 "reset": true, 00:20:48.238 "unmap": false, 00:20:48.238 "write": true, 00:20:48.238 "write_zeroes": true 00:20:48.238 }, 00:20:48.238 "uuid": "6308bc5f-7e30-4e85-8e37-f9f47056a368", 00:20:48.238 "zoned": false 00:20:48.238 } 00:20:48.238 ] 00:20:48.238 02:37:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.238 02:37:28 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:48.238 02:37:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.238 02:37:28 -- common/autotest_common.sh@10 -- # set +x 00:20:48.497 [2024-11-21 02:37:28.885638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:48.497 [2024-11-21 02:37:28.885884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f79f90 (9): Bad file descriptor 00:20:48.497 [2024-11-21 02:37:29.017845] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
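The controller reset that just completed is driven entirely over RPC; by hand the attach/inspect/reset loop looks roughly like this (sketch; arguments from the trace):

  # attach the SPDK NVMe-oF initiator bdev to the subsystem
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1          # cntlid 1, 2097152 blocks of 512 B
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0   # disconnect + reconnect
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1          # same bdev, now reported with cntlid 2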
00:20:48.497 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.497 02:37:29 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:48.497 02:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.497 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.497 [ 00:20:48.497 { 00:20:48.497 "aliases": [ 00:20:48.497 "6308bc5f-7e30-4e85-8e37-f9f47056a368" 00:20:48.497 ], 00:20:48.497 "assigned_rate_limits": { 00:20:48.497 "r_mbytes_per_sec": 0, 00:20:48.497 "rw_ios_per_sec": 0, 00:20:48.497 "rw_mbytes_per_sec": 0, 00:20:48.497 "w_mbytes_per_sec": 0 00:20:48.497 }, 00:20:48.497 "block_size": 512, 00:20:48.497 "claimed": false, 00:20:48.497 "driver_specific": { 00:20:48.497 "mp_policy": "active_passive", 00:20:48.497 "nvme": [ 00:20:48.497 { 00:20:48.497 "ctrlr_data": { 00:20:48.497 "ana_reporting": false, 00:20:48.497 "cntlid": 2, 00:20:48.497 "firmware_revision": "24.01.1", 00:20:48.497 "model_number": "SPDK bdev Controller", 00:20:48.497 "multi_ctrlr": true, 00:20:48.497 "oacs": { 00:20:48.497 "firmware": 0, 00:20:48.497 "format": 0, 00:20:48.497 "ns_manage": 0, 00:20:48.497 "security": 0 00:20:48.497 }, 00:20:48.497 "serial_number": "00000000000000000000", 00:20:48.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.497 "vendor_id": "0x8086" 00:20:48.497 }, 00:20:48.497 "ns_data": { 00:20:48.497 "can_share": true, 00:20:48.497 "id": 1 00:20:48.497 }, 00:20:48.497 "trid": { 00:20:48.497 "adrfam": "IPv4", 00:20:48.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.497 "traddr": "10.0.0.2", 00:20:48.497 "trsvcid": "4420", 00:20:48.497 "trtype": "TCP" 00:20:48.497 }, 00:20:48.497 "vs": { 00:20:48.497 "nvme_version": "1.3" 00:20:48.497 } 00:20:48.497 } 00:20:48.497 ] 00:20:48.497 }, 00:20:48.497 "name": "nvme0n1", 00:20:48.497 "num_blocks": 2097152, 00:20:48.497 "product_name": "NVMe disk", 00:20:48.497 "supported_io_types": { 00:20:48.497 "abort": true, 00:20:48.497 "compare": true, 00:20:48.497 "compare_and_write": true, 00:20:48.497 "flush": true, 00:20:48.497 "nvme_admin": true, 00:20:48.497 "nvme_io": true, 00:20:48.497 "read": true, 00:20:48.497 "reset": true, 00:20:48.497 "unmap": false, 00:20:48.497 "write": true, 00:20:48.497 "write_zeroes": true 00:20:48.497 }, 00:20:48.497 "uuid": "6308bc5f-7e30-4e85-8e37-f9f47056a368", 00:20:48.497 "zoned": false 00:20:48.497 } 00:20:48.497 ] 00:20:48.497 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.497 02:37:29 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.497 02:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.497 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.497 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.498 02:37:29 -- host/async_init.sh@53 -- # mktemp 00:20:48.498 02:37:29 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.HlT6kp2v8j 00:20:48.498 02:37:29 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:48.498 02:37:29 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.HlT6kp2v8j 00:20:48.498 02:37:29 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:48.498 02:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.498 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.498 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.498 02:37:29 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:48.498 02:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.498 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.498 [2024-11-21 02:37:29.089774] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:48.498 [2024-11-21 02:37:29.089884] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:48.498 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.498 02:37:29 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HlT6kp2v8j 00:20:48.498 02:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.498 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.498 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.498 02:37:29 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HlT6kp2v8j 00:20:48.498 02:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.498 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.498 [2024-11-21 02:37:29.109779] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.763 nvme0n1 00:20:48.763 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.763 02:37:29 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:48.763 02:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.763 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.763 [ 00:20:48.763 { 00:20:48.763 "aliases": [ 00:20:48.763 "6308bc5f-7e30-4e85-8e37-f9f47056a368" 00:20:48.763 ], 00:20:48.763 "assigned_rate_limits": { 00:20:48.763 "r_mbytes_per_sec": 0, 00:20:48.763 "rw_ios_per_sec": 0, 00:20:48.763 "rw_mbytes_per_sec": 0, 00:20:48.763 "w_mbytes_per_sec": 0 00:20:48.763 }, 00:20:48.763 "block_size": 512, 00:20:48.763 "claimed": false, 00:20:48.763 "driver_specific": { 00:20:48.763 "mp_policy": "active_passive", 00:20:48.763 "nvme": [ 00:20:48.763 { 00:20:48.763 "ctrlr_data": { 00:20:48.763 "ana_reporting": false, 00:20:48.763 "cntlid": 3, 00:20:48.763 "firmware_revision": "24.01.1", 00:20:48.763 "model_number": "SPDK bdev Controller", 00:20:48.763 "multi_ctrlr": true, 00:20:48.763 "oacs": { 00:20:48.763 "firmware": 0, 00:20:48.763 "format": 0, 00:20:48.763 "ns_manage": 0, 00:20:48.763 "security": 0 00:20:48.763 }, 00:20:48.763 "serial_number": "00000000000000000000", 00:20:48.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.763 "vendor_id": "0x8086" 00:20:48.763 }, 00:20:48.763 "ns_data": { 00:20:48.763 "can_share": true, 00:20:48.763 "id": 1 00:20:48.763 }, 00:20:48.763 "trid": { 00:20:48.763 "adrfam": "IPv4", 00:20:48.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:48.763 "traddr": "10.0.0.2", 00:20:48.763 "trsvcid": "4421", 00:20:48.763 "trtype": "TCP" 00:20:48.763 }, 00:20:48.763 "vs": { 00:20:48.763 "nvme_version": "1.3" 00:20:48.763 } 00:20:48.763 } 00:20:48.763 ] 00:20:48.763 }, 00:20:48.763 "name": "nvme0n1", 00:20:48.763 "num_blocks": 2097152, 00:20:48.764 "product_name": "NVMe disk", 00:20:48.764 "supported_io_types": { 00:20:48.764 "abort": true, 00:20:48.764 "compare": true, 00:20:48.764 "compare_and_write": true, 00:20:48.764 "flush": true, 00:20:48.764 "nvme_admin": true, 00:20:48.764 "nvme_io": true, 00:20:48.764 
"read": true, 00:20:48.764 "reset": true, 00:20:48.764 "unmap": false, 00:20:48.764 "write": true, 00:20:48.764 "write_zeroes": true 00:20:48.764 }, 00:20:48.764 "uuid": "6308bc5f-7e30-4e85-8e37-f9f47056a368", 00:20:48.764 "zoned": false 00:20:48.764 } 00:20:48.764 ] 00:20:48.764 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.764 02:37:29 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.764 02:37:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.764 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.764 02:37:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.764 02:37:29 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.HlT6kp2v8j 00:20:48.764 02:37:29 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:48.764 02:37:29 -- host/async_init.sh@78 -- # nvmftestfini 00:20:48.764 02:37:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:48.764 02:37:29 -- nvmf/common.sh@116 -- # sync 00:20:48.764 02:37:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:48.764 02:37:29 -- nvmf/common.sh@119 -- # set +e 00:20:48.764 02:37:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:48.764 02:37:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:48.764 rmmod nvme_tcp 00:20:48.764 rmmod nvme_fabrics 00:20:48.764 rmmod nvme_keyring 00:20:48.764 02:37:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:48.764 02:37:29 -- nvmf/common.sh@123 -- # set -e 00:20:48.764 02:37:29 -- nvmf/common.sh@124 -- # return 0 00:20:48.764 02:37:29 -- nvmf/common.sh@477 -- # '[' -n 82633 ']' 00:20:48.764 02:37:29 -- nvmf/common.sh@478 -- # killprocess 82633 00:20:48.764 02:37:29 -- common/autotest_common.sh@936 -- # '[' -z 82633 ']' 00:20:48.764 02:37:29 -- common/autotest_common.sh@940 -- # kill -0 82633 00:20:48.764 02:37:29 -- common/autotest_common.sh@941 -- # uname 00:20:48.764 02:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.764 02:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82633 00:20:48.764 killing process with pid 82633 00:20:48.764 02:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:48.764 02:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:48.764 02:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82633' 00:20:48.764 02:37:29 -- common/autotest_common.sh@955 -- # kill 82633 00:20:48.764 02:37:29 -- common/autotest_common.sh@960 -- # wait 82633 00:20:49.021 02:37:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:49.021 02:37:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:49.021 02:37:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:49.021 02:37:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.021 02:37:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:49.021 02:37:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.021 02:37:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.021 02:37:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.280 02:37:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:49.280 00:20:49.280 real 0m2.750s 00:20:49.280 user 0m2.468s 00:20:49.280 sys 0m0.671s 00:20:49.280 02:37:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:49.280 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:49.280 ************************************ 00:20:49.280 END TEST nvmf_async_init 00:20:49.280 
************************************ 00:20:49.280 02:37:29 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:49.280 02:37:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:49.280 02:37:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:49.280 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:49.280 ************************************ 00:20:49.280 START TEST dma 00:20:49.280 ************************************ 00:20:49.280 02:37:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:49.280 * Looking for test storage... 00:20:49.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:49.280 02:37:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:49.280 02:37:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:49.280 02:37:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:49.280 02:37:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:49.280 02:37:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:49.280 02:37:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:49.280 02:37:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:49.280 02:37:29 -- scripts/common.sh@335 -- # IFS=.-: 00:20:49.280 02:37:29 -- scripts/common.sh@335 -- # read -ra ver1 00:20:49.280 02:37:29 -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.280 02:37:29 -- scripts/common.sh@336 -- # read -ra ver2 00:20:49.280 02:37:29 -- scripts/common.sh@337 -- # local 'op=<' 00:20:49.280 02:37:29 -- scripts/common.sh@339 -- # ver1_l=2 00:20:49.280 02:37:29 -- scripts/common.sh@340 -- # ver2_l=1 00:20:49.280 02:37:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:49.280 02:37:29 -- scripts/common.sh@343 -- # case "$op" in 00:20:49.280 02:37:29 -- scripts/common.sh@344 -- # : 1 00:20:49.280 02:37:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:49.280 02:37:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:49.280 02:37:29 -- scripts/common.sh@364 -- # decimal 1 00:20:49.280 02:37:29 -- scripts/common.sh@352 -- # local d=1 00:20:49.280 02:37:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.280 02:37:29 -- scripts/common.sh@354 -- # echo 1 00:20:49.280 02:37:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:49.280 02:37:29 -- scripts/common.sh@365 -- # decimal 2 00:20:49.540 02:37:29 -- scripts/common.sh@352 -- # local d=2 00:20:49.540 02:37:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.540 02:37:29 -- scripts/common.sh@354 -- # echo 2 00:20:49.540 02:37:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:49.540 02:37:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:49.540 02:37:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:49.540 02:37:29 -- scripts/common.sh@367 -- # return 0 00:20:49.540 02:37:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.540 02:37:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:49.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.540 --rc genhtml_branch_coverage=1 00:20:49.540 --rc genhtml_function_coverage=1 00:20:49.540 --rc genhtml_legend=1 00:20:49.540 --rc geninfo_all_blocks=1 00:20:49.540 --rc geninfo_unexecuted_blocks=1 00:20:49.540 00:20:49.540 ' 00:20:49.540 02:37:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:49.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.540 --rc genhtml_branch_coverage=1 00:20:49.540 --rc genhtml_function_coverage=1 00:20:49.540 --rc genhtml_legend=1 00:20:49.540 --rc geninfo_all_blocks=1 00:20:49.540 --rc geninfo_unexecuted_blocks=1 00:20:49.540 00:20:49.540 ' 00:20:49.540 02:37:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:49.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.540 --rc genhtml_branch_coverage=1 00:20:49.540 --rc genhtml_function_coverage=1 00:20:49.540 --rc genhtml_legend=1 00:20:49.540 --rc geninfo_all_blocks=1 00:20:49.540 --rc geninfo_unexecuted_blocks=1 00:20:49.540 00:20:49.540 ' 00:20:49.540 02:37:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:49.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.540 --rc genhtml_branch_coverage=1 00:20:49.540 --rc genhtml_function_coverage=1 00:20:49.540 --rc genhtml_legend=1 00:20:49.540 --rc geninfo_all_blocks=1 00:20:49.540 --rc geninfo_unexecuted_blocks=1 00:20:49.540 00:20:49.540 ' 00:20:49.540 02:37:29 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:49.540 02:37:29 -- nvmf/common.sh@7 -- # uname -s 00:20:49.540 02:37:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.540 02:37:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.540 02:37:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.540 02:37:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.540 02:37:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.540 02:37:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.540 02:37:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.540 02:37:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.540 02:37:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.540 02:37:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.540 02:37:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:49.540 
02:37:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:49.540 02:37:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.540 02:37:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.540 02:37:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:49.540 02:37:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.540 02:37:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.540 02:37:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.540 02:37:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.540 02:37:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.540 02:37:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.540 02:37:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.540 02:37:29 -- paths/export.sh@5 -- # export PATH 00:20:49.540 02:37:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.540 02:37:29 -- nvmf/common.sh@46 -- # : 0 00:20:49.540 02:37:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:49.540 02:37:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:49.540 02:37:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:49.540 02:37:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.540 02:37:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.540 02:37:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:49.540 02:37:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:49.540 02:37:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:49.540 02:37:29 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:49.540 02:37:29 -- host/dma.sh@13 -- # exit 0 00:20:49.540 ************************************ 00:20:49.540 END TEST dma 00:20:49.540 ************************************ 00:20:49.540 00:20:49.540 real 0m0.210s 00:20:49.540 user 0m0.129s 00:20:49.540 sys 0m0.088s 00:20:49.540 02:37:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:49.540 02:37:29 -- common/autotest_common.sh@10 -- # set +x 00:20:49.540 02:37:30 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:49.540 02:37:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:49.540 02:37:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:49.540 02:37:30 -- common/autotest_common.sh@10 -- # set +x 00:20:49.540 ************************************ 00:20:49.540 START TEST nvmf_identify 00:20:49.540 ************************************ 00:20:49.540 02:37:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:49.540 * Looking for test storage... 00:20:49.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:49.540 02:37:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:49.540 02:37:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:49.540 02:37:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:49.800 02:37:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:49.800 02:37:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:49.800 02:37:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:49.800 02:37:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:49.800 02:37:30 -- scripts/common.sh@335 -- # IFS=.-: 00:20:49.800 02:37:30 -- scripts/common.sh@335 -- # read -ra ver1 00:20:49.800 02:37:30 -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.800 02:37:30 -- scripts/common.sh@336 -- # read -ra ver2 00:20:49.800 02:37:30 -- scripts/common.sh@337 -- # local 'op=<' 00:20:49.800 02:37:30 -- scripts/common.sh@339 -- # ver1_l=2 00:20:49.800 02:37:30 -- scripts/common.sh@340 -- # ver2_l=1 00:20:49.800 02:37:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:49.800 02:37:30 -- scripts/common.sh@343 -- # case "$op" in 00:20:49.800 02:37:30 -- scripts/common.sh@344 -- # : 1 00:20:49.800 02:37:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:49.800 02:37:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:49.800 02:37:30 -- scripts/common.sh@364 -- # decimal 1 00:20:49.800 02:37:30 -- scripts/common.sh@352 -- # local d=1 00:20:49.800 02:37:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.800 02:37:30 -- scripts/common.sh@354 -- # echo 1 00:20:49.800 02:37:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:49.800 02:37:30 -- scripts/common.sh@365 -- # decimal 2 00:20:49.800 02:37:30 -- scripts/common.sh@352 -- # local d=2 00:20:49.800 02:37:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.800 02:37:30 -- scripts/common.sh@354 -- # echo 2 00:20:49.800 02:37:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:49.801 02:37:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:49.801 02:37:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:49.801 02:37:30 -- scripts/common.sh@367 -- # return 0 00:20:49.801 02:37:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.801 02:37:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:49.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.801 --rc genhtml_branch_coverage=1 00:20:49.801 --rc genhtml_function_coverage=1 00:20:49.801 --rc genhtml_legend=1 00:20:49.801 --rc geninfo_all_blocks=1 00:20:49.801 --rc geninfo_unexecuted_blocks=1 00:20:49.801 00:20:49.801 ' 00:20:49.801 02:37:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:49.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.801 --rc genhtml_branch_coverage=1 00:20:49.801 --rc genhtml_function_coverage=1 00:20:49.801 --rc genhtml_legend=1 00:20:49.801 --rc geninfo_all_blocks=1 00:20:49.801 --rc geninfo_unexecuted_blocks=1 00:20:49.801 00:20:49.801 ' 00:20:49.801 02:37:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:49.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.801 --rc genhtml_branch_coverage=1 00:20:49.801 --rc genhtml_function_coverage=1 00:20:49.801 --rc genhtml_legend=1 00:20:49.801 --rc geninfo_all_blocks=1 00:20:49.801 --rc geninfo_unexecuted_blocks=1 00:20:49.801 00:20:49.801 ' 00:20:49.801 02:37:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:49.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.801 --rc genhtml_branch_coverage=1 00:20:49.801 --rc genhtml_function_coverage=1 00:20:49.801 --rc genhtml_legend=1 00:20:49.801 --rc geninfo_all_blocks=1 00:20:49.801 --rc geninfo_unexecuted_blocks=1 00:20:49.801 00:20:49.801 ' 00:20:49.801 02:37:30 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:49.801 02:37:30 -- nvmf/common.sh@7 -- # uname -s 00:20:49.801 02:37:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.801 02:37:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.801 02:37:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.801 02:37:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.801 02:37:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.801 02:37:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.801 02:37:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.801 02:37:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.801 02:37:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.801 02:37:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.801 02:37:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:49.801 
02:37:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:49.801 02:37:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.801 02:37:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.801 02:37:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:49.801 02:37:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:49.801 02:37:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.801 02:37:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.801 02:37:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.801 02:37:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.801 02:37:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.801 02:37:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.801 02:37:30 -- paths/export.sh@5 -- # export PATH 00:20:49.801 02:37:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.801 02:37:30 -- nvmf/common.sh@46 -- # : 0 00:20:49.801 02:37:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:49.801 02:37:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:49.801 02:37:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:49.801 02:37:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.801 02:37:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.801 02:37:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:49.801 02:37:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:49.801 02:37:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:49.801 02:37:30 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:49.801 02:37:30 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:49.801 02:37:30 -- host/identify.sh@14 -- # nvmftestinit 00:20:49.801 02:37:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:49.801 02:37:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.801 02:37:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:49.801 02:37:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:49.801 02:37:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:49.801 02:37:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.801 02:37:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.801 02:37:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.801 02:37:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:49.801 02:37:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:49.801 02:37:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:49.801 02:37:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:49.801 02:37:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:49.801 02:37:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:49.801 02:37:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.801 02:37:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.801 02:37:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:49.801 02:37:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:49.801 02:37:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:49.801 02:37:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:49.801 02:37:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:49.801 02:37:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.801 02:37:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:49.801 02:37:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:49.801 02:37:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:49.801 02:37:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:49.801 02:37:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:49.801 02:37:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:49.801 Cannot find device "nvmf_tgt_br" 00:20:49.802 02:37:30 -- nvmf/common.sh@154 -- # true 00:20:49.802 02:37:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:49.802 Cannot find device "nvmf_tgt_br2" 00:20:49.802 02:37:30 -- nvmf/common.sh@155 -- # true 00:20:49.802 02:37:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:49.802 02:37:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:49.802 Cannot find device "nvmf_tgt_br" 00:20:49.802 02:37:30 -- nvmf/common.sh@157 -- # true 00:20:49.802 02:37:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:49.802 Cannot find device "nvmf_tgt_br2" 00:20:49.802 02:37:30 -- nvmf/common.sh@158 -- # true 00:20:49.802 02:37:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:49.802 02:37:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:49.802 02:37:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:49.802 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:49.802 02:37:30 -- nvmf/common.sh@161 -- # true 00:20:49.802 02:37:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:49.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:49.802 02:37:30 -- nvmf/common.sh@162 -- # true 00:20:49.802 02:37:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:49.802 02:37:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:49.802 02:37:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:49.802 02:37:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:49.802 02:37:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:49.802 02:37:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:49.802 02:37:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:49.802 02:37:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:49.802 02:37:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:49.802 02:37:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:49.802 02:37:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:50.061 02:37:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:50.061 02:37:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:50.061 02:37:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:50.061 02:37:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:50.061 02:37:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:50.061 02:37:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:50.061 02:37:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:50.061 02:37:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:50.061 02:37:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:50.061 02:37:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:50.061 02:37:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:50.061 02:37:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:50.061 02:37:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:50.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:20:50.061 00:20:50.061 --- 10.0.0.2 ping statistics --- 00:20:50.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.061 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:50.061 02:37:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:50.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:50.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:50.061 00:20:50.061 --- 10.0.0.3 ping statistics --- 00:20:50.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.061 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:50.061 02:37:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:50.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:50.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:20:50.061 00:20:50.061 --- 10.0.0.1 ping statistics --- 00:20:50.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.061 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:50.061 02:37:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.061 02:37:30 -- nvmf/common.sh@421 -- # return 0 00:20:50.061 02:37:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:50.061 02:37:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.061 02:37:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:50.061 02:37:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:50.061 02:37:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.061 02:37:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:50.061 02:37:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:50.061 02:37:30 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:50.061 02:37:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.061 02:37:30 -- common/autotest_common.sh@10 -- # set +x 00:20:50.061 02:37:30 -- host/identify.sh@19 -- # nvmfpid=82917 00:20:50.061 02:37:30 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:50.061 02:37:30 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.061 02:37:30 -- host/identify.sh@23 -- # waitforlisten 82917 00:20:50.061 02:37:30 -- common/autotest_common.sh@829 -- # '[' -z 82917 ']' 00:20:50.061 02:37:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.061 02:37:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.061 02:37:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.061 02:37:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.061 02:37:30 -- common/autotest_common.sh@10 -- # set +x 00:20:50.061 [2024-11-21 02:37:30.648560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:50.061 [2024-11-21 02:37:30.648659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.320 [2024-11-21 02:37:30.793339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.320 [2024-11-21 02:37:30.914205] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:50.320 [2024-11-21 02:37:30.914405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.320 [2024-11-21 02:37:30.914423] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.320 [2024-11-21 02:37:30.914435] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
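Everything from nvmf_veth_init down to the nvmf_tgt launch above builds the virtual network the TCP tests run on. A condensed sketch of that topology, with names and addresses taken from the commands logged above (the second target interface, 10.0.0.3, and the bridge FORWARD rule are set up the same way and omitted here):

  # The target lives in its own network namespace.
  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per side; the *_br ends are later enslaved to a bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Initiator is 10.0.0.1 on the host, target is 10.0.0.2 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side ends together and open the NVMe/TCP port.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  # The target itself then runs inside the namespace, as logged above:
  # ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With this layout the host-side scripts reach the target at 10.0.0.2:4420 while the target reaches back to the initiator at 10.0.0.1, all without touching the machine's real interfaces.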
00:20:50.320 [2024-11-21 02:37:30.914588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.320 [2024-11-21 02:37:30.914899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.320 [2024-11-21 02:37:30.915594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.320 [2024-11-21 02:37:30.915683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.255 02:37:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.255 02:37:31 -- common/autotest_common.sh@862 -- # return 0 00:20:51.255 02:37:31 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.255 02:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.255 02:37:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 [2024-11-21 02:37:31.704206] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.255 02:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.255 02:37:31 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:51.255 02:37:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:51.255 02:37:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 02:37:31 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:51.255 02:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.255 02:37:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 Malloc0 00:20:51.255 02:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.255 02:37:31 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.255 02:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.255 02:37:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 02:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.255 02:37:31 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:51.255 02:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.255 02:37:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 02:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.255 02:37:31 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:51.255 02:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.255 02:37:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 [2024-11-21 02:37:31.834697] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.255 02:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.255 02:37:31 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:51.255 02:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.255 02:37:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 02:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.255 02:37:31 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:51.255 02:37:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.255 02:37:31 -- common/autotest_common.sh@10 -- # set +x 00:20:51.255 [2024-11-21 02:37:31.854405] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:51.255 [ 
00:20:51.255 { 00:20:51.255 "allow_any_host": true, 00:20:51.255 "hosts": [], 00:20:51.255 "listen_addresses": [ 00:20:51.255 { 00:20:51.255 "adrfam": "IPv4", 00:20:51.255 "traddr": "10.0.0.2", 00:20:51.255 "transport": "TCP", 00:20:51.255 "trsvcid": "4420", 00:20:51.255 "trtype": "TCP" 00:20:51.255 } 00:20:51.255 ], 00:20:51.255 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:51.255 "subtype": "Discovery" 00:20:51.255 }, 00:20:51.255 { 00:20:51.255 "allow_any_host": true, 00:20:51.255 "hosts": [], 00:20:51.255 "listen_addresses": [ 00:20:51.255 { 00:20:51.255 "adrfam": "IPv4", 00:20:51.255 "traddr": "10.0.0.2", 00:20:51.255 "transport": "TCP", 00:20:51.255 "trsvcid": "4420", 00:20:51.255 "trtype": "TCP" 00:20:51.255 } 00:20:51.255 ], 00:20:51.255 "max_cntlid": 65519, 00:20:51.255 "max_namespaces": 32, 00:20:51.255 "min_cntlid": 1, 00:20:51.255 "model_number": "SPDK bdev Controller", 00:20:51.255 "namespaces": [ 00:20:51.255 { 00:20:51.255 "bdev_name": "Malloc0", 00:20:51.255 "eui64": "ABCDEF0123456789", 00:20:51.255 "name": "Malloc0", 00:20:51.255 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:51.255 "nsid": 1, 00:20:51.255 "uuid": "8fadb40d-8863-43ac-a24f-749d48b462a9" 00:20:51.255 } 00:20:51.255 ], 00:20:51.255 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.255 "serial_number": "SPDK00000000000001", 00:20:51.255 "subtype": "NVMe" 00:20:51.255 } 00:20:51.255 ] 00:20:51.255 02:37:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.255 02:37:31 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:51.255 [2024-11-21 02:37:31.896148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
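The subsystem listing above is the result of the short RPC sequence identify.sh runs once the target is up. A minimal sketch of the equivalent calls with scripts/rpc.py, using the method names and arguments exactly as logged (the rpc.py path and the default RPC socket are assumptions):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, flags as logged
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems                         # returns the JSON shown above

spdk_nvme_identify is then pointed at the discovery subsystem with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all, which is the command whose DPDK/EAL startup and fabric CONNECT debug trace follows.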
00:20:51.255 [2024-11-21 02:37:31.896368] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82975 ] 00:20:51.517 [2024-11-21 02:37:32.032172] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:51.517 [2024-11-21 02:37:32.032244] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:51.517 [2024-11-21 02:37:32.032251] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:51.517 [2024-11-21 02:37:32.032261] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:51.517 [2024-11-21 02:37:32.032272] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:51.517 [2024-11-21 02:37:32.032436] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:51.517 [2024-11-21 02:37:32.032511] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7acd30 0 00:20:51.517 [2024-11-21 02:37:32.038763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:51.517 [2024-11-21 02:37:32.038784] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:51.517 [2024-11-21 02:37:32.038800] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:51.517 [2024-11-21 02:37:32.038804] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:51.517 [2024-11-21 02:37:32.038864] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.038872] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.038876] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.517 [2024-11-21 02:37:32.038891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:51.517 [2024-11-21 02:37:32.038921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.517 [2024-11-21 02:37:32.046757] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.517 [2024-11-21 02:37:32.046775] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.517 [2024-11-21 02:37:32.046780] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.046794] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.517 [2024-11-21 02:37:32.046805] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:51.517 [2024-11-21 02:37:32.046813] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:51.517 [2024-11-21 02:37:32.046819] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:51.517 [2024-11-21 02:37:32.046835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.046839] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.046843] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.517 [2024-11-21 02:37:32.046851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.517 [2024-11-21 02:37:32.046879] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.517 [2024-11-21 02:37:32.046951] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.517 [2024-11-21 02:37:32.046957] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.517 [2024-11-21 02:37:32.046960] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.046964] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.517 [2024-11-21 02:37:32.046969] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:51.517 [2024-11-21 02:37:32.046976] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:51.517 [2024-11-21 02:37:32.046982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.046986] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.046989] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.517 [2024-11-21 02:37:32.046996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.517 [2024-11-21 02:37:32.047014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.517 [2024-11-21 02:37:32.047108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.517 [2024-11-21 02:37:32.047114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.517 [2024-11-21 02:37:32.047117] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.047120] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.517 [2024-11-21 02:37:32.047125] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:51.517 [2024-11-21 02:37:32.047133] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:51.517 [2024-11-21 02:37:32.047139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.047143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.517 [2024-11-21 02:37:32.047146] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.517 [2024-11-21 02:37:32.047152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.517 [2024-11-21 02:37:32.047169] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.517 [2024-11-21 02:37:32.047230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.517 [2024-11-21 02:37:32.047235] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:51.517 [2024-11-21 02:37:32.047238] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.518 [2024-11-21 02:37:32.047247] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:51.518 [2024-11-21 02:37:32.047256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047260] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047263] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.047269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.518 [2024-11-21 02:37:32.047286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.518 [2024-11-21 02:37:32.047350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.518 [2024-11-21 02:37:32.047356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.518 [2024-11-21 02:37:32.047359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.518 [2024-11-21 02:37:32.047367] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:51.518 [2024-11-21 02:37:32.047372] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:51.518 [2024-11-21 02:37:32.047379] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:51.518 [2024-11-21 02:37:32.047484] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:51.518 [2024-11-21 02:37:32.047488] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:51.518 [2024-11-21 02:37:32.047498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047505] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.047511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.518 [2024-11-21 02:37:32.047530] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.518 [2024-11-21 02:37:32.047598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.518 [2024-11-21 02:37:32.047604] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.518 [2024-11-21 02:37:32.047607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047610] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.518 [2024-11-21 02:37:32.047615] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:51.518 [2024-11-21 02:37:32.047623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047627] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047630] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.047637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.518 [2024-11-21 02:37:32.047654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.518 [2024-11-21 02:37:32.047720] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.518 [2024-11-21 02:37:32.047726] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.518 [2024-11-21 02:37:32.047729] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047733] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.518 [2024-11-21 02:37:32.047748] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:51.518 [2024-11-21 02:37:32.047754] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:51.518 [2024-11-21 02:37:32.047766] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:51.518 [2024-11-21 02:37:32.047781] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:51.518 [2024-11-21 02:37:32.047792] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047796] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047799] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.047806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.518 [2024-11-21 02:37:32.047827] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.518 [2024-11-21 02:37:32.047941] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.518 [2024-11-21 02:37:32.047947] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.518 [2024-11-21 02:37:32.047951] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047954] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7acd30): datao=0, datal=4096, cccid=0 00:20:51.518 [2024-11-21 02:37:32.047958] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80af30) on tqpair(0x7acd30): expected_datao=0, payload_size=4096 00:20:51.518 [2024-11-21 02:37:32.047967] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047971] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.518 [2024-11-21 02:37:32.047983] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.518 [2024-11-21 02:37:32.047986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.047990] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.518 [2024-11-21 02:37:32.047997] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:51.518 [2024-11-21 02:37:32.048002] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:51.518 [2024-11-21 02:37:32.048006] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:51.518 [2024-11-21 02:37:32.048012] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:51.518 [2024-11-21 02:37:32.048018] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:51.518 [2024-11-21 02:37:32.048022] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:51.518 [2024-11-21 02:37:32.048035] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:51.518 [2024-11-21 02:37:32.048042] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048049] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.048057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:51.518 [2024-11-21 02:37:32.048075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.518 [2024-11-21 02:37:32.048164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.518 [2024-11-21 02:37:32.048170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.518 [2024-11-21 02:37:32.048173] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048176] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80af30) on tqpair=0x7acd30 00:20:51.518 [2024-11-21 02:37:32.048184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048187] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.048196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.518 [2024-11-21 02:37:32.048201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.048213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.518 [2024-11-21 02:37:32.048218] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.048229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.518 [2024-11-21 02:37:32.048234] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.048245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.518 [2024-11-21 02:37:32.048249] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:51.518 [2024-11-21 02:37:32.048261] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:51.518 [2024-11-21 02:37:32.048268] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048272] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.518 [2024-11-21 02:37:32.048275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7acd30) 00:20:51.518 [2024-11-21 02:37:32.048281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.518 [2024-11-21 02:37:32.048300] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80af30, cid 0, qid 0 00:20:51.518 [2024-11-21 02:37:32.048306] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b090, cid 1, qid 0 00:20:51.518 [2024-11-21 02:37:32.048310] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b1f0, cid 2, qid 0 00:20:51.519 [2024-11-21 02:37:32.048314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.519 [2024-11-21 02:37:32.048318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b4b0, cid 4, qid 0 00:20:51.519 [2024-11-21 02:37:32.048417] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.519 [2024-11-21 02:37:32.048423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.519 [2024-11-21 02:37:32.048426] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048430] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b4b0) on tqpair=0x7acd30 00:20:51.519 
[2024-11-21 02:37:32.048435] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:51.519 [2024-11-21 02:37:32.048440] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:51.519 [2024-11-21 02:37:32.048450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048457] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7acd30) 00:20:51.519 [2024-11-21 02:37:32.048463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.519 [2024-11-21 02:37:32.048480] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b4b0, cid 4, qid 0 00:20:51.519 [2024-11-21 02:37:32.048558] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.519 [2024-11-21 02:37:32.048570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.519 [2024-11-21 02:37:32.048574] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048577] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7acd30): datao=0, datal=4096, cccid=4 00:20:51.519 [2024-11-21 02:37:32.048581] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80b4b0) on tqpair(0x7acd30): expected_datao=0, payload_size=4096 00:20:51.519 [2024-11-21 02:37:32.048588] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048592] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048600] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.519 [2024-11-21 02:37:32.048605] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.519 [2024-11-21 02:37:32.048607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048611] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b4b0) on tqpair=0x7acd30 00:20:51.519 [2024-11-21 02:37:32.048624] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:51.519 [2024-11-21 02:37:32.048651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048656] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048659] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7acd30) 00:20:51.519 [2024-11-21 02:37:32.048666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.519 [2024-11-21 02:37:32.048672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048676] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048680] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7acd30) 00:20:51.519 [2024-11-21 02:37:32.048685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:51.519 [2024-11-21 02:37:32.048709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b4b0, cid 4, qid 0 00:20:51.519 [2024-11-21 02:37:32.048716] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b610, cid 5, qid 0 00:20:51.519 [2024-11-21 02:37:32.048862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.519 [2024-11-21 02:37:32.048870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.519 [2024-11-21 02:37:32.048873] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048876] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7acd30): datao=0, datal=1024, cccid=4 00:20:51.519 [2024-11-21 02:37:32.048880] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80b4b0) on tqpair(0x7acd30): expected_datao=0, payload_size=1024 00:20:51.519 [2024-11-21 02:37:32.048887] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048890] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.519 [2024-11-21 02:37:32.048900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.519 [2024-11-21 02:37:32.048902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.048906] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b610) on tqpair=0x7acd30 00:20:51.519 [2024-11-21 02:37:32.093752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.519 [2024-11-21 02:37:32.093771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.519 [2024-11-21 02:37:32.093775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.093779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b4b0) on tqpair=0x7acd30 00:20:51.519 [2024-11-21 02:37:32.093802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.093807] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.093811] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7acd30) 00:20:51.519 [2024-11-21 02:37:32.093818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.519 [2024-11-21 02:37:32.093849] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b4b0, cid 4, qid 0 00:20:51.519 [2024-11-21 02:37:32.093928] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.519 [2024-11-21 02:37:32.093935] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.519 [2024-11-21 02:37:32.093938] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.093941] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7acd30): datao=0, datal=3072, cccid=4 00:20:51.519 [2024-11-21 02:37:32.093945] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80b4b0) on tqpair(0x7acd30): expected_datao=0, payload_size=3072 00:20:51.519 [2024-11-21 02:37:32.093952] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 
02:37:32.093956] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.093963] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.519 [2024-11-21 02:37:32.093968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.519 [2024-11-21 02:37:32.093971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.093974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b4b0) on tqpair=0x7acd30 00:20:51.519 [2024-11-21 02:37:32.093984] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.093988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.094014] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7acd30) 00:20:51.519 [2024-11-21 02:37:32.094021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.519 [2024-11-21 02:37:32.094055] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b4b0, cid 4, qid 0 00:20:51.519 [2024-11-21 02:37:32.094160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.519 [2024-11-21 02:37:32.094166] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.519 [2024-11-21 02:37:32.094169] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.094172] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7acd30): datao=0, datal=8, cccid=4 00:20:51.519 [2024-11-21 02:37:32.094176] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x80b4b0) on tqpair(0x7acd30): expected_datao=0, payload_size=8 00:20:51.519 [2024-11-21 02:37:32.094183] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.519 [2024-11-21 02:37:32.094186] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.519 ===================================================== 00:20:51.519 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:51.519 ===================================================== 00:20:51.519 Controller Capabilities/Features 00:20:51.519 ================================ 00:20:51.519 Vendor ID: 0000 00:20:51.519 Subsystem Vendor ID: 0000 00:20:51.519 Serial Number: .................... 00:20:51.519 Model Number: ........................................ 
00:20:51.519 Firmware Version: 24.01.1 00:20:51.519 Recommended Arb Burst: 0 00:20:51.519 IEEE OUI Identifier: 00 00 00 00:20:51.519 Multi-path I/O 00:20:51.519 May have multiple subsystem ports: No 00:20:51.519 May have multiple controllers: No 00:20:51.519 Associated with SR-IOV VF: No 00:20:51.519 Max Data Transfer Size: 131072 00:20:51.519 Max Number of Namespaces: 0 00:20:51.519 Max Number of I/O Queues: 1024 00:20:51.519 NVMe Specification Version (VS): 1.3 00:20:51.519 NVMe Specification Version (Identify): 1.3 00:20:51.519 Maximum Queue Entries: 128 00:20:51.519 Contiguous Queues Required: Yes 00:20:51.519 Arbitration Mechanisms Supported 00:20:51.519 Weighted Round Robin: Not Supported 00:20:51.519 Vendor Specific: Not Supported 00:20:51.519 Reset Timeout: 15000 ms 00:20:51.519 Doorbell Stride: 4 bytes 00:20:51.519 NVM Subsystem Reset: Not Supported 00:20:51.519 Command Sets Supported 00:20:51.519 NVM Command Set: Supported 00:20:51.519 Boot Partition: Not Supported 00:20:51.519 Memory Page Size Minimum: 4096 bytes 00:20:51.519 Memory Page Size Maximum: 4096 bytes 00:20:51.519 Persistent Memory Region: Not Supported 00:20:51.519 Optional Asynchronous Events Supported 00:20:51.519 Namespace Attribute Notices: Not Supported 00:20:51.519 Firmware Activation Notices: Not Supported 00:20:51.519 ANA Change Notices: Not Supported 00:20:51.519 PLE Aggregate Log Change Notices: Not Supported 00:20:51.519 LBA Status Info Alert Notices: Not Supported 00:20:51.519 EGE Aggregate Log Change Notices: Not Supported 00:20:51.519 Normal NVM Subsystem Shutdown event: Not Supported 00:20:51.519 Zone Descriptor Change Notices: Not Supported 00:20:51.519 Discovery Log Change Notices: Supported 00:20:51.519 Controller Attributes 00:20:51.519 128-bit Host Identifier: Not Supported 00:20:51.519 Non-Operational Permissive Mode: Not Supported 00:20:51.519 NVM Sets: Not Supported 00:20:51.520 Read Recovery Levels: Not Supported 00:20:51.520 Endurance Groups: Not Supported 00:20:51.520 Predictable Latency Mode: Not Supported 00:20:51.520 Traffic Based Keep ALive: Not Supported 00:20:51.520 Namespace Granularity: Not Supported 00:20:51.520 SQ Associations: Not Supported 00:20:51.520 UUID List: Not Supported 00:20:51.520 Multi-Domain Subsystem: Not Supported 00:20:51.520 Fixed Capacity Management: Not Supported 00:20:51.520 Variable Capacity Management: Not Supported 00:20:51.520 Delete Endurance Group: Not Supported 00:20:51.520 Delete NVM Set: Not Supported 00:20:51.520 Extended LBA Formats Supported: Not Supported 00:20:51.520 Flexible Data Placement Supported: Not Supported 00:20:51.520 00:20:51.520 Controller Memory Buffer Support 00:20:51.520 ================================ 00:20:51.520 Supported: No 00:20:51.520 00:20:51.520 Persistent Memory Region Support 00:20:51.520 ================================ 00:20:51.520 Supported: No 00:20:51.520 00:20:51.520 Admin Command Set Attributes 00:20:51.520 ============================ 00:20:51.520 Security Send/Receive: Not Supported 00:20:51.520 Format NVM: Not Supported 00:20:51.520 Firmware Activate/Download: Not Supported 00:20:51.520 Namespace Management: Not Supported 00:20:51.520 Device Self-Test: Not Supported 00:20:51.520 Directives: Not Supported 00:20:51.520 NVMe-MI: Not Supported 00:20:51.520 Virtualization Management: Not Supported 00:20:51.520 Doorbell Buffer Config: Not Supported 00:20:51.520 Get LBA Status Capability: Not Supported 00:20:51.520 Command & Feature Lockdown Capability: Not Supported 00:20:51.520 Abort Command Limit: 1 00:20:51.520 
Async Event Request Limit: 4 00:20:51.520 Number of Firmware Slots: N/A 00:20:51.520 Firmware Slot 1 Read-Only: N/A 00:20:51.520 [2024-11-21 02:37:32.134897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.520 [2024-11-21 02:37:32.134919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.520 [2024-11-21 02:37:32.134924] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.520 [2024-11-21 02:37:32.134938] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b4b0) on tqpair=0x7acd30 00:20:51.520 Firmware Activation Without Reset: N/A 00:20:51.520 Multiple Update Detection Support: N/A 00:20:51.520 Firmware Update Granularity: No Information Provided 00:20:51.520 Per-Namespace SMART Log: No 00:20:51.520 Asymmetric Namespace Access Log Page: Not Supported 00:20:51.520 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:51.520 Command Effects Log Page: Not Supported 00:20:51.520 Get Log Page Extended Data: Supported 00:20:51.520 Telemetry Log Pages: Not Supported 00:20:51.520 Persistent Event Log Pages: Not Supported 00:20:51.520 Supported Log Pages Log Page: May Support 00:20:51.520 Commands Supported & Effects Log Page: Not Supported 00:20:51.520 Feature Identifiers & Effects Log Page:May Support 00:20:51.520 NVMe-MI Commands & Effects Log Page: May Support 00:20:51.520 Data Area 4 for Telemetry Log: Not Supported 00:20:51.520 Error Log Page Entries Supported: 128 00:20:51.520 Keep Alive: Not Supported 00:20:51.520 00:20:51.520 NVM Command Set Attributes 00:20:51.520 ========================== 00:20:51.520 Submission Queue Entry Size 00:20:51.520 Max: 1 00:20:51.520 Min: 1 00:20:51.520 Completion Queue Entry Size 00:20:51.520 Max: 1 00:20:51.520 Min: 1 00:20:51.520 Number of Namespaces: 0 00:20:51.520 Compare Command: Not Supported 00:20:51.520 Write Uncorrectable Command: Not Supported 00:20:51.520 Dataset Management Command: Not Supported 00:20:51.520 Write Zeroes Command: Not Supported 00:20:51.520 Set Features Save Field: Not Supported 00:20:51.520 Reservations: Not Supported 00:20:51.520 Timestamp: Not Supported 00:20:51.520 Copy: Not Supported 00:20:51.520 Volatile Write Cache: Not Present 00:20:51.520 Atomic Write Unit (Normal): 1 00:20:51.520 Atomic Write Unit (PFail): 1 00:20:51.520 Atomic Compare & Write Unit: 1 00:20:51.520 Fused Compare & Write: Supported 00:20:51.520 Scatter-Gather List 00:20:51.520 SGL Command Set: Supported 00:20:51.520 SGL Keyed: Supported 00:20:51.520 SGL Bit Bucket Descriptor: Not Supported 00:20:51.520 SGL Metadata Pointer: Not Supported 00:20:51.520 Oversized SGL: Not Supported 00:20:51.520 SGL Metadata Address: Not Supported 00:20:51.520 SGL Offset: Supported 00:20:51.520 Transport SGL Data Block: Not Supported 00:20:51.520 Replay Protected Memory Block: Not Supported 00:20:51.520 00:20:51.520 Firmware Slot Information 00:20:51.520 ========================= 00:20:51.520 Active slot: 0 00:20:51.520 00:20:51.520 00:20:51.520 Error Log 00:20:51.520 ========= 00:20:51.520 00:20:51.520 Active Namespaces 00:20:51.520 ================= 00:20:51.520 Discovery Log Page 00:20:51.520 ================== 00:20:51.520 Generation Counter: 2 00:20:51.520 Number of Records: 2 00:20:51.520 Record Format: 0 00:20:51.520 00:20:51.520 Discovery Log Entry 0 00:20:51.520 ---------------------- 00:20:51.520 Transport Type: 3 (TCP) 00:20:51.520 Address Family: 1 (IPv4) 00:20:51.520 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:51.520 Entry Flags: 00:20:51.520 Duplicate
Returned Information: 1 00:20:51.520 Explicit Persistent Connection Support for Discovery: 1 00:20:51.520 Transport Requirements: 00:20:51.520 Secure Channel: Not Required 00:20:51.520 Port ID: 0 (0x0000) 00:20:51.520 Controller ID: 65535 (0xffff) 00:20:51.520 Admin Max SQ Size: 128 00:20:51.520 Transport Service Identifier: 4420 00:20:51.520 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:51.520 Transport Address: 10.0.0.2 00:20:51.520 Discovery Log Entry 1 00:20:51.520 ---------------------- 00:20:51.520 Transport Type: 3 (TCP) 00:20:51.520 Address Family: 1 (IPv4) 00:20:51.520 Subsystem Type: 2 (NVM Subsystem) 00:20:51.520 Entry Flags: 00:20:51.520 Duplicate Returned Information: 0 00:20:51.520 Explicit Persistent Connection Support for Discovery: 0 00:20:51.520 Transport Requirements: 00:20:51.520 Secure Channel: Not Required 00:20:51.520 Port ID: 0 (0x0000) 00:20:51.520 Controller ID: 65535 (0xffff) 00:20:51.520 Admin Max SQ Size: 128 00:20:51.520 Transport Service Identifier: 4420 00:20:51.520 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:51.520 Transport Address: 10.0.0.2 [2024-11-21 02:37:32.135049] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:51.520 [2024-11-21 02:37:32.135066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.520 [2024-11-21 02:37:32.135073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.520 [2024-11-21 02:37:32.135078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.520 [2024-11-21 02:37:32.135083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.520 [2024-11-21 02:37:32.135093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.520 [2024-11-21 02:37:32.135096] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.520 [2024-11-21 02:37:32.135100] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.520 [2024-11-21 02:37:32.135108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.520 [2024-11-21 02:37:32.135132] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.520 [2024-11-21 02:37:32.135224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.520 [2024-11-21 02:37:32.135230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.520 [2024-11-21 02:37:32.135235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.520 [2024-11-21 02:37:32.135239] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.520 [2024-11-21 02:37:32.135246] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.520 [2024-11-21 02:37:32.135250] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.520 [2024-11-21 02:37:32.135253] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.520 [2024-11-21 02:37:32.135259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.520 [2024-11-21 02:37:32.135281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.520 [2024-11-21 02:37:32.135382] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.520 [2024-11-21 02:37:32.135388] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.520 [2024-11-21 02:37:32.135391] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.520 [2024-11-21 02:37:32.135394] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.520 [2024-11-21 02:37:32.135399] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:51.520 [2024-11-21 02:37:32.135404] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:51.521 [2024-11-21 02:37:32.135413] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135416] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.135426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.135443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.135501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.135506] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.135510] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135513] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.135523] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.135536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.135553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.135620] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.135625] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.135628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.135641] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135644] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135647] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 
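The GET LOG PAGE (02) admin commands earlier in this trace (cdw10:00ff0070, then cdw10:02ff0070, then cdw10:00010070) are reads of the discovery log page, LID 0x70: the host fetches the page header, then the full page with its entries, then re-reads the generation counter to confirm the log did not change underneath it, which is what produces the "Discovery Log Page" section above. A minimal host-side sketch of the same read is shown below. It assumes an already-connected struct spdk_nvme_ctrlr, the public spdk_nvme_ctrlr_cmd_get_log_page() call from spdk/nvme.h, and the discovery log structures from spdk/nvmf_spec.h; the helper names (dump_discovery_log, get_log_done, DISCOVERY_ENTRIES) are illustrative only, and the sketch polls the admin queue synchronously for brevity instead of reacting to completions the way the trace does.

#include <errno.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

#define DISCOVERY_ENTRIES 16

struct get_log_ctx {
	bool done;
	bool failed;
};

/* Admin-command completion callback: remember whether the GET LOG PAGE failed. */
static void
get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct get_log_ctx *ctx = arg;

	ctx->failed = spdk_nvme_cpl_is_error(cpl);
	ctx->done = true;
}

/* Fetch the first DISCOVERY_ENTRIES records of the discovery log (LID 0x70) from an
 * already-connected discovery controller and print them, roughly what the
 * "Discovery Log Entry N" sections above report. */
static int
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	size_t len = sizeof(struct spdk_nvmf_discovery_log_page) +
		     DISCOVERY_ENTRIES * sizeof(struct spdk_nvmf_discovery_log_page_entry);
	struct spdk_nvmf_discovery_log_page *log;
	struct get_log_ctx ctx = {0};
	int rc;

	/* Allocate the payload from SPDK's DMA-safe allocator so the same buffer
	 * also works for PCIe transports, not just TCP. */
	log = spdk_zmalloc(len, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	if (log == NULL) {
		return -ENOMEM;
	}

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      SPDK_NVME_GLOBAL_NS_TAG, log, len, 0,
					      get_log_done, &ctx);
	if (rc == 0) {
		/* Poll the admin queue until the completion seen in the trace arrives. */
		while (!ctx.done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
		rc = ctx.failed ? -EIO : 0;
	}

	for (uint64_t i = 0; rc == 0 && i < log->numrec && i < DISCOVERY_ENTRIES; i++) {
		const struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

		/* trsvcid and subnqn are fixed-width, space-padded fields per the NVMe-oF
		 * spec, so print them with an explicit width rather than as C strings. */
		printf("entry %" PRIu64 ": trtype=%u trsvcid=%.*s subnqn=%.*s\n",
		       i, (unsigned)e->trtype,
		       (int)sizeof(e->trsvcid), e->trsvcid,
		       (int)sizeof(e->subnqn), e->subnqn);
	}

	spdk_free(log);
	return rc;
}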
00:20:51.521 [2024-11-21 02:37:32.135653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.135670] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.135730] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.135736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.135772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135776] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.135787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135794] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.135800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.135819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.135904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.135910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.135913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135916] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.135926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135930] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.135933] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.135939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.135956] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.136017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.136023] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.136026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.136038] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136042] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.136051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 
02:37:32.136070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.136130] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.136135] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.136139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136142] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.136150] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136154] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136157] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.136163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.136180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.136240] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.136246] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.136249] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136252] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.136261] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136265] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136268] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.136274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.136290] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.136349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.136354] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.136358] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136361] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.136370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136373] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136376] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.136382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.136399] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.136466] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:20:51.521 [2024-11-21 02:37:32.136471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.136474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136478] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.136486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136490] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.136499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.136515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.136580] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.136586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.136589] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136593] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.136602] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136606] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.136615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.136632] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.136701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.136712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.136715] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136719] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.521 [2024-11-21 02:37:32.136728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.521 [2024-11-21 02:37:32.136735] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.521 [2024-11-21 02:37:32.136754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.521 [2024-11-21 02:37:32.136783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.521 [2024-11-21 02:37:32.136872] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.521 [2024-11-21 02:37:32.136878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.521 [2024-11-21 02:37:32.136881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.136884] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.136893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.136897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.136900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.136906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.136923] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.136993] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.136999] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.137002] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137006] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.137015] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137018] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137021] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.137028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.137043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.137109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.137114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.137118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137121] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.137130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.137143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.137159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.137219] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.137225] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.137228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.137240] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.137253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.137269] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.137335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.137340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.137343] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137347] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.137357] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137360] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.137369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.137386] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.137444] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.137449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.137452] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137456] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.137464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.137477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.137494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.137556] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.137562] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.137565] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137568] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.137577] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 
02:37:32.137584] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.137591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.137606] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.137664] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.137670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.137673] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137676] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.137685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137689] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.137692] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.137698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.137714] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.141767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.141790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.141794] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.141798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.141809] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.141813] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.141817] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7acd30) 00:20:51.522 [2024-11-21 02:37:32.141824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.522 [2024-11-21 02:37:32.141847] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x80b350, cid 3, qid 0 00:20:51.522 [2024-11-21 02:37:32.141910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.522 [2024-11-21 02:37:32.141916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.522 [2024-11-21 02:37:32.141919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.522 [2024-11-21 02:37:32.141922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x80b350) on tqpair=0x7acd30 00:20:51.522 [2024-11-21 02:37:32.141929] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:51.522 00:20:51.522 02:37:32 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:51.784 [2024-11-21 02:37:32.176258] 
Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:51.784 [2024-11-21 02:37:32.176293] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82978 ] 00:20:51.784 [2024-11-21 02:37:32.308202] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:51.784 [2024-11-21 02:37:32.308269] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:51.784 [2024-11-21 02:37:32.308276] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:51.784 [2024-11-21 02:37:32.308286] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:51.784 [2024-11-21 02:37:32.308295] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:51.784 [2024-11-21 02:37:32.308387] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:51.784 [2024-11-21 02:37:32.308433] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfd5d30 0 00:20:51.784 [2024-11-21 02:37:32.323759] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:51.784 [2024-11-21 02:37:32.323780] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:51.784 [2024-11-21 02:37:32.323785] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:51.784 [2024-11-21 02:37:32.323788] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:51.784 [2024-11-21 02:37:32.323840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.784 [2024-11-21 02:37:32.323847] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.784 [2024-11-21 02:37:32.323851] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.784 [2024-11-21 02:37:32.323861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:51.784 [2024-11-21 02:37:32.323890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.784 [2024-11-21 02:37:32.331759] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.784 [2024-11-21 02:37:32.331779] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.784 [2024-11-21 02:37:32.331784] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.784 [2024-11-21 02:37:32.331788] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on tqpair=0xfd5d30 00:20:51.784 [2024-11-21 02:37:32.331799] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:51.784 [2024-11-21 02:37:32.331806] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:51.784 [2024-11-21 02:37:32.331811] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:51.784 [2024-11-21 02:37:32.331825] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.784 [2024-11-21 02:37:32.331830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
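The records above and below trace the same controller bring-up, this time against nqn.2016-06.io.spdk:cnode1: the host parses the transport ID passed on the spdk_nvme_identify command line, connects the admin queue over TCP, reads VS and CAP, toggles CC.EN, waits for CSTS.RDY = 1 and then identifies the controller. From an application, that whole state machine runs inside a single connect call. The following is a rough sketch only, under the assumption that spdk_env_init(), spdk_nvme_transport_id_parse(), spdk_nvme_connect(), spdk_nvme_ctrlr_get_data() and spdk_nvme_detach() from spdk/env.h and spdk/nvme.h behave as in this SPDK revision; the program name "identify_sketch" is made up for the example.

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_ctrlr *ctrlr;

	/* Bring up the SPDK environment (hugepages, etc.) before touching the driver. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	/* Keep-alive timeout; the trace shows the resulting periodic KEEP ALIVE (18) commands. */
	ctrlr_opts.keep_alive_timeout_ms = 10000;

	/* spdk_nvme_connect() drives the init state machine logged here:
	 * connect adminq -> read vs -> read cap -> enable -> wait for CSTS.RDY = 1 -> identify. */
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		return 1;
	}

	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	/* mdts is the raw exponent field; the trace prints the derived max_xfer_size. */
	printf("connected to %s, raw MDTS field %u\n", trid.subnqn, (unsigned)cdata->mdts);

	/* Detach triggers a shutdown sequence like the "shutdown complete" one above. */
	spdk_nvme_detach(ctrlr);
	return 0;
}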
00:20:51.784 [2024-11-21 02:37:32.331833] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.785 [2024-11-21 02:37:32.331841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.785 [2024-11-21 02:37:32.331868] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.785 [2024-11-21 02:37:32.331941] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.785 [2024-11-21 02:37:32.331947] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.785 [2024-11-21 02:37:32.331951] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.331954] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on tqpair=0xfd5d30 00:20:51.785 [2024-11-21 02:37:32.331960] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:51.785 [2024-11-21 02:37:32.331967] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:51.785 [2024-11-21 02:37:32.331974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.331977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.331981] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.785 [2024-11-21 02:37:32.331987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.785 [2024-11-21 02:37:32.332005] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.785 [2024-11-21 02:37:32.332073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.785 [2024-11-21 02:37:32.332079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.785 [2024-11-21 02:37:32.332085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on tqpair=0xfd5d30 00:20:51.785 [2024-11-21 02:37:32.332095] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:51.785 [2024-11-21 02:37:32.332102] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:51.785 [2024-11-21 02:37:32.332109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.785 [2024-11-21 02:37:32.332122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.785 [2024-11-21 02:37:32.332138] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.785 [2024-11-21 02:37:32.332209] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.785 [2024-11-21 02:37:32.332215] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:20:51.785 [2024-11-21 02:37:32.332218] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on tqpair=0xfd5d30 00:20:51.785 [2024-11-21 02:37:32.332227] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:51.785 [2024-11-21 02:37:32.332236] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332240] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.785 [2024-11-21 02:37:32.332249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.785 [2024-11-21 02:37:32.332265] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.785 [2024-11-21 02:37:32.332328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.785 [2024-11-21 02:37:32.332334] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.785 [2024-11-21 02:37:32.332337] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332341] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on tqpair=0xfd5d30 00:20:51.785 [2024-11-21 02:37:32.332346] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:51.785 [2024-11-21 02:37:32.332351] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:51.785 [2024-11-21 02:37:32.332358] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:51.785 [2024-11-21 02:37:32.332463] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:51.785 [2024-11-21 02:37:32.332467] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:51.785 [2024-11-21 02:37:32.332474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332481] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.785 [2024-11-21 02:37:32.332488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.785 [2024-11-21 02:37:32.332505] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.785 [2024-11-21 02:37:32.332573] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.785 [2024-11-21 02:37:32.332579] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.785 [2024-11-21 02:37:32.332583] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332587] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on 
tqpair=0xfd5d30 00:20:51.785 [2024-11-21 02:37:32.332592] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:51.785 [2024-11-21 02:37:32.332601] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332608] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.785 [2024-11-21 02:37:32.332615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.785 [2024-11-21 02:37:32.332631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.785 [2024-11-21 02:37:32.332708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.785 [2024-11-21 02:37:32.332713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.785 [2024-11-21 02:37:32.332717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332720] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on tqpair=0xfd5d30 00:20:51.785 [2024-11-21 02:37:32.332725] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:51.785 [2024-11-21 02:37:32.332730] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:51.785 [2024-11-21 02:37:32.332749] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:51.785 [2024-11-21 02:37:32.332767] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:51.785 [2024-11-21 02:37:32.332776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332780] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.785 [2024-11-21 02:37:32.332790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.785 [2024-11-21 02:37:32.332809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.785 [2024-11-21 02:37:32.332924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.785 [2024-11-21 02:37:32.332931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.785 [2024-11-21 02:37:32.332935] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332938] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd5d30): datao=0, datal=4096, cccid=0 00:20:51.785 [2024-11-21 02:37:32.332943] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1033f30) on tqpair(0xfd5d30): expected_datao=0, payload_size=4096 00:20:51.785 [2024-11-21 02:37:32.332950] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332954] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.785 [2024-11-21 02:37:32.332975] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.785 [2024-11-21 02:37:32.332978] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.332982] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on tqpair=0xfd5d30 00:20:51.785 [2024-11-21 02:37:32.332990] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:51.785 [2024-11-21 02:37:32.332995] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:51.785 [2024-11-21 02:37:32.332999] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:51.785 [2024-11-21 02:37:32.333002] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:51.785 [2024-11-21 02:37:32.333007] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:51.785 [2024-11-21 02:37:32.333011] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:51.785 [2024-11-21 02:37:32.333023] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:51.785 [2024-11-21 02:37:32.333030] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.333034] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.333037] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.785 [2024-11-21 02:37:32.333044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:51.785 [2024-11-21 02:37:32.333063] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.785 [2024-11-21 02:37:32.333143] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.785 [2024-11-21 02:37:32.333150] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.785 [2024-11-21 02:37:32.333153] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.785 [2024-11-21 02:37:32.333156] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1033f30) on tqpair=0xfd5d30 00:20:51.785 [2024-11-21 02:37:32.333163] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333167] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333170] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.333176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.786 [2024-11-21 02:37:32.333181] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333185] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333188] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.333193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.786 [2024-11-21 02:37:32.333198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333202] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.333210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.786 [2024-11-21 02:37:32.333215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333218] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333221] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.333226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.786 [2024-11-21 02:37:32.333230] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333242] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333255] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.333261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.786 [2024-11-21 02:37:32.333280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033f30, cid 0, qid 0 00:20:51.786 [2024-11-21 02:37:32.333287] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034090, cid 1, qid 0 00:20:51.786 [2024-11-21 02:37:32.333291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10341f0, cid 2, qid 0 00:20:51.786 [2024-11-21 02:37:32.333295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034350, cid 3, qid 0 00:20:51.786 [2024-11-21 02:37:32.333299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10344b0, cid 4, qid 0 00:20:51.786 [2024-11-21 02:37:32.333404] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.786 [2024-11-21 02:37:32.333410] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.786 [2024-11-21 02:37:32.333413] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10344b0) on tqpair=0xfd5d30 00:20:51.786 [2024-11-21 02:37:32.333422] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:51.786 [2024-11-21 
02:37:32.333427] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333435] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333445] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333452] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333455] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.333466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:51.786 [2024-11-21 02:37:32.333483] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10344b0, cid 4, qid 0 00:20:51.786 [2024-11-21 02:37:32.333534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.786 [2024-11-21 02:37:32.333540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.786 [2024-11-21 02:37:32.333544] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10344b0) on tqpair=0xfd5d30 00:20:51.786 [2024-11-21 02:37:32.333598] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333608] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.333639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.786 [2024-11-21 02:37:32.333657] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10344b0, cid 4, qid 0 00:20:51.786 [2024-11-21 02:37:32.333727] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.786 [2024-11-21 02:37:32.333733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.786 [2024-11-21 02:37:32.333736] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333752] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd5d30): datao=0, datal=4096, cccid=4 00:20:51.786 [2024-11-21 02:37:32.333758] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10344b0) on tqpair(0xfd5d30): expected_datao=0, payload_size=4096 00:20:51.786 [2024-11-21 02:37:32.333766] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333770] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.786 [2024-11-21 02:37:32.333783] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.786 [2024-11-21 02:37:32.333786] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333790] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10344b0) on tqpair=0xfd5d30 00:20:51.786 [2024-11-21 02:37:32.333806] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:51.786 [2024-11-21 02:37:32.333817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.333835] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.333848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.786 [2024-11-21 02:37:32.333868] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10344b0, cid 4, qid 0 00:20:51.786 [2024-11-21 02:37:32.333958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.786 [2024-11-21 02:37:32.333964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.786 [2024-11-21 02:37:32.333968] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333971] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd5d30): datao=0, datal=4096, cccid=4 00:20:51.786 [2024-11-21 02:37:32.333975] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10344b0) on tqpair(0xfd5d30): expected_datao=0, payload_size=4096 00:20:51.786 [2024-11-21 02:37:32.333982] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.333986] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.334005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.786 [2024-11-21 02:37:32.334011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.786 [2024-11-21 02:37:32.334015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.334018] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10344b0) on tqpair=0xfd5d30 00:20:51.786 [2024-11-21 02:37:32.334036] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.334046] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:51.786 [2024-11-21 02:37:32.334054] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.334058] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.334061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd5d30) 00:20:51.786 [2024-11-21 02:37:32.334068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.786 [2024-11-21 02:37:32.334087] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10344b0, cid 4, qid 0 00:20:51.786 [2024-11-21 02:37:32.334164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.786 [2024-11-21 02:37:32.334170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.786 [2024-11-21 02:37:32.334174] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.334177] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd5d30): datao=0, datal=4096, cccid=4 00:20:51.786 [2024-11-21 02:37:32.334181] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10344b0) on tqpair(0xfd5d30): expected_datao=0, payload_size=4096 00:20:51.786 [2024-11-21 02:37:32.334188] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.334192] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.334199] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.786 [2024-11-21 02:37:32.334204] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.786 [2024-11-21 02:37:32.334208] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.786 [2024-11-21 02:37:32.334211] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10344b0) on tqpair=0xfd5d30 00:20:51.787 [2024-11-21 02:37:32.334220] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:51.787 [2024-11-21 02:37:32.334228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:51.787 [2024-11-21 02:37:32.334238] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:51.787 [2024-11-21 02:37:32.334244] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:51.787 [2024-11-21 02:37:32.334249] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:51.787 [2024-11-21 02:37:32.334254] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:51.787 [2024-11-21 02:37:32.334258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:51.787 [2024-11-21 02:37:32.334263] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:51.787 [2024-11-21 02:37:32.334284] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334291] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.787 [2024-11-21 02:37:32.334303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334306] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.787 [2024-11-21 02:37:32.334338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10344b0, cid 4, qid 0 00:20:51.787 [2024-11-21 02:37:32.334345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034610, cid 5, qid 0 00:20:51.787 [2024-11-21 02:37:32.334426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.787 [2024-11-21 02:37:32.334432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.787 [2024-11-21 02:37:32.334435] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10344b0) on tqpair=0xfd5d30 00:20:51.787 [2024-11-21 02:37:32.334446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.787 [2024-11-21 02:37:32.334451] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.787 [2024-11-21 02:37:32.334454] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334458] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034610) on tqpair=0xfd5d30 00:20:51.787 [2024-11-21 02:37:32.334467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334474] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.787 [2024-11-21 02:37:32.334497] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034610, cid 5, qid 0 00:20:51.787 [2024-11-21 02:37:32.334561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.787 [2024-11-21 02:37:32.334566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.787 [2024-11-21 02:37:32.334570] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034610) on tqpair=0xfd5d30 00:20:51.787 [2024-11-21 02:37:32.334583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334590] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334596] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.787 [2024-11-21 02:37:32.334611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034610, cid 5, qid 0 00:20:51.787 [2024-11-21 02:37:32.334670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.787 [2024-11-21 02:37:32.334676] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.787 [2024-11-21 02:37:32.334679] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334682] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034610) on tqpair=0xfd5d30 00:20:51.787 [2024-11-21 02:37:32.334692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.787 [2024-11-21 02:37:32.334720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034610, cid 5, qid 0 00:20:51.787 [2024-11-21 02:37:32.334799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.787 [2024-11-21 02:37:32.334807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.787 [2024-11-21 02:37:32.334810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034610) on tqpair=0xfd5d30 00:20:51.787 [2024-11-21 02:37:32.334827] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334831] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.787 [2024-11-21 02:37:32.334847] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334851] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334854] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.787 [2024-11-21 02:37:32.334866] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334873] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:51.787 [2024-11-21 02:37:32.334884] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334888] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.334891] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfd5d30) 00:20:51.787 [2024-11-21 02:37:32.334896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.787 [2024-11-21 02:37:32.334916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034610, cid 5, qid 0 00:20:51.787 [2024-11-21 02:37:32.334923] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10344b0, cid 4, qid 0 00:20:51.787 [2024-11-21 02:37:32.334927] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034770, cid 6, qid 0 00:20:51.787 [2024-11-21 02:37:32.334931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10348d0, cid 7, qid 0 00:20:51.787 [2024-11-21 02:37:32.335063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.787 [2024-11-21 02:37:32.335070] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.787 [2024-11-21 02:37:32.335073] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335076] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd5d30): datao=0, datal=8192, cccid=5 00:20:51.787 [2024-11-21 02:37:32.335080] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1034610) on tqpair(0xfd5d30): expected_datao=0, payload_size=8192 00:20:51.787 [2024-11-21 02:37:32.335095] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335100] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.787 [2024-11-21 02:37:32.335110] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.787 [2024-11-21 02:37:32.335113] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335117] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd5d30): datao=0, datal=512, cccid=4 00:20:51.787 [2024-11-21 02:37:32.335121] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10344b0) on tqpair(0xfd5d30): expected_datao=0, payload_size=512 00:20:51.787 [2024-11-21 02:37:32.335127] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335130] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335135] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.787 [2024-11-21 02:37:32.335140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.787 [2024-11-21 02:37:32.335143] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335146] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd5d30): datao=0, datal=512, cccid=6 00:20:51.787 [2024-11-21 02:37:32.335150] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1034770) on tqpair(0xfd5d30): expected_datao=0, payload_size=512 00:20:51.787 [2024-11-21 02:37:32.335156] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335159] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:51.787 [2024-11-21 02:37:32.335169] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:51.787 [2024-11-21 02:37:32.335172] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335175] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd5d30): datao=0, datal=4096, cccid=7 00:20:51.787 [2024-11-21 02:37:32.335178] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10348d0) on tqpair(0xfd5d30): expected_datao=0, payload_size=4096 00:20:51.787 [2024-11-21 02:37:32.335184] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335188] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.787 [2024-11-21 02:37:32.335199] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.787 [2024-11-21 02:37:32.335203] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.787 [2024-11-21 02:37:32.335206] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034610) on tqpair=0xfd5d30 00:20:51.787 [2024-11-21 02:37:32.335224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.787 [2024-11-21 02:37:32.335230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.787 [2024-11-21 02:37:32.335233] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.788 [2024-11-21 02:37:32.335237] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10344b0) on tqpair=0xfd5d30 00:20:51.788 [2024-11-21 02:37:32.335247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.788 [2024-11-21 02:37:32.335252] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.788 [2024-11-21 02:37:32.335255] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.788 [2024-11-21 02:37:32.335259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034770) on tqpair=0xfd5d30 00:20:51.788 [2024-11-21 02:37:32.335266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.788 [2024-11-21 02:37:32.335271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.788 [2024-11-21 02:37:32.335274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.788 [2024-11-21 02:37:32.335277] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10348d0) on tqpair=0xfd5d30===================================================== 00:20:51.788 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.788 ===================================================== 00:20:51.788 Controller Capabilities/Features 00:20:51.788 ================================ 00:20:51.788 Vendor ID: 8086 00:20:51.788 Subsystem Vendor ID: 8086 00:20:51.788 Serial Number: SPDK00000000000001 00:20:51.788 Model Number: SPDK bdev Controller 00:20:51.788 Firmware Version: 24.01.1 00:20:51.788 Recommended Arb Burst: 6 00:20:51.788 IEEE OUI Identifier: e4 d2 5c 00:20:51.788 Multi-path I/O 00:20:51.788 May have multiple subsystem ports: Yes 
00:20:51.788 May have multiple controllers: Yes 00:20:51.788 Associated with SR-IOV VF: No 00:20:51.788 Max Data Transfer Size: 131072 00:20:51.788 Max Number of Namespaces: 32 00:20:51.788 Max Number of I/O Queues: 127 00:20:51.788 NVMe Specification Version (VS): 1.3 00:20:51.788 NVMe Specification Version (Identify): 1.3 00:20:51.788 Maximum Queue Entries: 128 00:20:51.788 Contiguous Queues Required: Yes 00:20:51.788 Arbitration Mechanisms Supported 00:20:51.788 Weighted Round Robin: Not Supported 00:20:51.788 Vendor Specific: Not Supported 00:20:51.788 Reset Timeout: 15000 ms 00:20:51.788 Doorbell Stride: 4 bytes 00:20:51.788 NVM Subsystem Reset: Not Supported 00:20:51.788 Command Sets Supported 00:20:51.788 NVM Command Set: Supported 00:20:51.788 Boot Partition: Not Supported 00:20:51.788 Memory Page Size Minimum: 4096 bytes 00:20:51.788 Memory Page Size Maximum: 4096 bytes 00:20:51.788 Persistent Memory Region: Not Supported 00:20:51.788 Optional Asynchronous Events Supported 00:20:51.788 Namespace Attribute Notices: Supported 00:20:51.788 Firmware Activation Notices: Not Supported 00:20:51.788 ANA Change Notices: Not Supported 00:20:51.788 PLE Aggregate Log Change Notices: Not Supported 00:20:51.788 LBA Status Info Alert Notices: Not Supported 00:20:51.788 EGE Aggregate Log Change Notices: Not Supported 00:20:51.788 Normal NVM Subsystem Shutdown event: Not Supported 00:20:51.788 Zone Descriptor Change Notices: Not Supported 00:20:51.788 Discovery Log Change Notices: Not Supported 00:20:51.788 Controller Attributes 00:20:51.788 128-bit Host Identifier: Supported 00:20:51.788 Non-Operational Permissive Mode: Not Supported 00:20:51.788 NVM Sets: Not Supported 00:20:51.788 Read Recovery Levels: Not Supported 00:20:51.788 Endurance Groups: Not Supported 00:20:51.788 Predictable Latency Mode: Not Supported 00:20:51.788 Traffic Based Keep ALive: Not Supported 00:20:51.788 Namespace Granularity: Not Supported 00:20:51.788 SQ Associations: Not Supported 00:20:51.788 UUID List: Not Supported 00:20:51.788 Multi-Domain Subsystem: Not Supported 00:20:51.788 Fixed Capacity Management: Not Supported 00:20:51.788 Variable Capacity Management: Not Supported 00:20:51.788 Delete Endurance Group: Not Supported 00:20:51.788 Delete NVM Set: Not Supported 00:20:51.788 Extended LBA Formats Supported: Not Supported 00:20:51.788 Flexible Data Placement Supported: Not Supported 00:20:51.788 00:20:51.788 Controller Memory Buffer Support 00:20:51.788 ================================ 00:20:51.788 Supported: No 00:20:51.788 00:20:51.788 Persistent Memory Region Support 00:20:51.788 ================================ 00:20:51.788 Supported: No 00:20:51.788 00:20:51.788 Admin Command Set Attributes 00:20:51.788 ============================ 00:20:51.788 Security Send/Receive: Not Supported 00:20:51.788 Format NVM: Not Supported 00:20:51.788 Firmware Activate/Download: Not Supported 00:20:51.788 Namespace Management: Not Supported 00:20:51.788 Device Self-Test: Not Supported 00:20:51.788 Directives: Not Supported 00:20:51.788 NVMe-MI: Not Supported 00:20:51.788 Virtualization Management: Not Supported 00:20:51.788 Doorbell Buffer Config: Not Supported 00:20:51.788 Get LBA Status Capability: Not Supported 00:20:51.788 Command & Feature Lockdown Capability: Not Supported 00:20:51.788 Abort Command Limit: 4 00:20:51.788 Async Event Request Limit: 4 00:20:51.788 Number of Firmware Slots: N/A 00:20:51.788 Firmware Slot 1 Read-Only: N/A 00:20:51.788 Firmware Activation Without Reset: N/A 00:20:51.788 Multiple Update 
Detection Support: N/A 00:20:51.788 Firmware Update Granularity: No Information Provided 00:20:51.788 Per-Namespace SMART Log: No 00:20:51.788 Asymmetric Namespace Access Log Page: Not Supported 00:20:51.788 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:51.788 Command Effects Log Page: Supported 00:20:51.788 Get Log Page Extended Data: Supported 00:20:51.788 Telemetry Log Pages: Not Supported 00:20:51.788 Persistent Event Log Pages: Not Supported 00:20:51.788 Supported Log Pages Log Page: May Support 00:20:51.788 Commands Supported & Effects Log Page: Not Supported 00:20:51.788 Feature Identifiers & Effects Log Page:May Support 00:20:51.788 NVMe-MI Commands & Effects Log Page: May Support 00:20:51.788 Data Area 4 for Telemetry Log: Not Supported 00:20:51.788 Error Log Page Entries Supported: 128 00:20:51.788 Keep Alive: Supported 00:20:51.788 Keep Alive Granularity: 10000 ms 00:20:51.788 00:20:51.788 NVM Command Set Attributes 00:20:51.788 ========================== 00:20:51.788 Submission Queue Entry Size 00:20:51.788 Max: 64 00:20:51.788 Min: 64 00:20:51.788 Completion Queue Entry Size 00:20:51.788 Max: 16 00:20:51.788 Min: 16 00:20:51.788 Number of Namespaces: 32 00:20:51.788 Compare Command: Supported 00:20:51.788 Write Uncorrectable Command: Not Supported 00:20:51.788 Dataset Management Command: Supported 00:20:51.788 Write Zeroes Command: Supported 00:20:51.788 Set Features Save Field: Not Supported 00:20:51.788 Reservations: Supported 00:20:51.788 Timestamp: Not Supported 00:20:51.788 Copy: Supported 00:20:51.788 Volatile Write Cache: Present 00:20:51.788 Atomic Write Unit (Normal): 1 00:20:51.788 Atomic Write Unit (PFail): 1 00:20:51.788 Atomic Compare & Write Unit: 1 00:20:51.788 Fused Compare & Write: Supported 00:20:51.788 Scatter-Gather List 00:20:51.788 SGL Command Set: Supported 00:20:51.788 SGL Keyed: Supported 00:20:51.788 SGL Bit Bucket Descriptor: Not Supported 00:20:51.788 SGL Metadata Pointer: Not Supported 00:20:51.788 Oversized SGL: Not Supported 00:20:51.788 SGL Metadata Address: Not Supported 00:20:51.788 SGL Offset: Supported 00:20:51.788 Transport SGL Data Block: Not Supported 00:20:51.788 Replay Protected Memory Block: Not Supported 00:20:51.788 00:20:51.788 Firmware Slot Information 00:20:51.788 ========================= 00:20:51.788 Active slot: 1 00:20:51.788 Slot 1 Firmware Revision: 24.01.1 00:20:51.788 00:20:51.788 00:20:51.788 Commands Supported and Effects 00:20:51.788 ============================== 00:20:51.788 Admin Commands 00:20:51.788 -------------- 00:20:51.788 Get Log Page (02h): Supported 00:20:51.788 Identify (06h): Supported 00:20:51.788 Abort (08h): Supported 00:20:51.788 Set Features (09h): Supported 00:20:51.788 Get Features (0Ah): Supported 00:20:51.788 Asynchronous Event Request (0Ch): Supported 00:20:51.788 Keep Alive (18h): Supported 00:20:51.788 I/O Commands 00:20:51.788 ------------ 00:20:51.788 Flush (00h): Supported LBA-Change 00:20:51.788 Write (01h): Supported LBA-Change 00:20:51.788 Read (02h): Supported 00:20:51.788 Compare (05h): Supported 00:20:51.788 Write Zeroes (08h): Supported LBA-Change 00:20:51.788 Dataset Management (09h): Supported LBA-Change 00:20:51.788 Copy (19h): Supported LBA-Change 00:20:51.788 Unknown (79h): Supported LBA-Change 00:20:51.788 Unknown (7Ah): Supported 00:20:51.788 00:20:51.788 Error Log 00:20:51.788 ========= 00:20:51.788 00:20:51.788 Arbitration 00:20:51.788 =========== 00:20:51.788 Arbitration Burst: 1 00:20:51.788 00:20:51.788 Power Management 00:20:51.788 ================ 00:20:51.788 Number 
of Power States: 1 00:20:51.788 Current Power State: Power State #0 00:20:51.788 Power State #0: 00:20:51.788 Max Power: 0.00 W 00:20:51.788 Non-Operational State: Operational 00:20:51.788 Entry Latency: Not Reported 00:20:51.788 Exit Latency: Not Reported 00:20:51.788 Relative Read Throughput: 0 00:20:51.788 Relative Read Latency: 0 00:20:51.788 Relative Write Throughput: 0 00:20:51.788 Relative Write Latency: 0 00:20:51.788 Idle Power: Not Reported 00:20:51.788 Active Power: Not Reported 00:20:51.788 Non-Operational Permissive Mode: Not Supported 00:20:51.788 00:20:51.788 Health Information 00:20:51.788 ================== 00:20:51.788 Critical Warnings: 00:20:51.788 Available Spare Space: OK 00:20:51.788 Temperature: OK 00:20:51.788 Device Reliability: OK 00:20:51.788 Read Only: No 00:20:51.788 Volatile Memory Backup: OK 00:20:51.788 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:51.788 Temperature Threshold: 00:20:51.789 [2024-11-21 02:37:32.335374] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.335380] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.335384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfd5d30) 00:20:51.789 [2024-11-21 02:37:32.335390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.789 [2024-11-21 02:37:32.335411] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10348d0, cid 7, qid 0 00:20:51.789 [2024-11-21 02:37:32.335482] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.789 [2024-11-21 02:37:32.335489] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.789 [2024-11-21 02:37:32.335492] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.335496] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10348d0) on tqpair=0xfd5d30 00:20:51.789 [2024-11-21 02:37:32.335528] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:51.789 [2024-11-21 02:37:32.335539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.789 [2024-11-21 02:37:32.335545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.789 [2024-11-21 02:37:32.335551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.789 [2024-11-21 02:37:32.335556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.789 [2024-11-21 02:37:32.335563] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.335567] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.335570] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd5d30) 00:20:51.789 [2024-11-21 02:37:32.335576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.789 [2024-11-21 02:37:32.335596] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034350, cid 3, qid 0 
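The debug entries above trace the initiator side of an NVMe-oF TCP association being brought up: Fabrics PROPERTY SET/GET capsules toggle CC.EN and poll CSTS.RDY, after which the host issues Identify Controller, arms the four Asynchronous Event Requests, programs the keep-alive timer, negotiates the number of queues, and walks the namespace Identify variants before the controller is marked ready. The same handshake can be driven outside the test harness with the standard kernel initiator; the address, port, and NQN below are taken from the log, while the commands themselves are a minimal illustrative sketch (they assume nvme-cli and the nvme-tcp kernel module, which are not part of this test script):

    modprobe nvme-tcp                                   # load the kernel NVMe/TCP host driver
    nvme discover -t tcp -a 10.0.0.2 -s 4420            # list subsystems exposed by the target
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
                 -n nqn.2016-06.io.spdk:cnode1          # performs the same CC.EN/CSTS.RDY + Identify sequence
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # tear the association down again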
00:20:51.789 [2024-11-21 02:37:32.335662] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.789 [2024-11-21 02:37:32.335668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.789 [2024-11-21 02:37:32.335671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.335675] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034350) on tqpair=0xfd5d30 00:20:51.789 [2024-11-21 02:37:32.335683] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.335686] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.335690] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd5d30) 00:20:51.789 [2024-11-21 02:37:32.335696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.789 [2024-11-21 02:37:32.335715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034350, cid 3, qid 0 00:20:51.789 [2024-11-21 02:37:32.339753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.789 [2024-11-21 02:37:32.339771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.789 [2024-11-21 02:37:32.339775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.339779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034350) on tqpair=0xfd5d30 00:20:51.789 [2024-11-21 02:37:32.339785] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:51.789 [2024-11-21 02:37:32.339789] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:51.789 [2024-11-21 02:37:32.339801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.339806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.339809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd5d30) 00:20:51.789 [2024-11-21 02:37:32.339816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.789 [2024-11-21 02:37:32.339839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1034350, cid 3, qid 0 00:20:51.789 [2024-11-21 02:37:32.339903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:51.789 [2024-11-21 02:37:32.339909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:51.789 [2024-11-21 02:37:32.339912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:51.789 [2024-11-21 02:37:32.339916] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1034350) on tqpair=0xfd5d30 00:20:51.789 [2024-11-21 02:37:32.339925] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:20:51.789 0 Kelvin (-273 Celsius) 00:20:51.789 Available Spare: 0% 00:20:51.789 Available Spare Threshold: 0% 00:20:51.789 Life Percentage Used: 0% 00:20:51.789 Data Units Read: 0 00:20:51.789 Data Units Written: 0 00:20:51.789 Host Read Commands: 0 00:20:51.789 Host Write Commands: 0 00:20:51.789 Controller Busy Time: 0 minutes 00:20:51.789 Power Cycles: 0 00:20:51.789 Power On 
Hours: 0 hours 00:20:51.789 Unsafe Shutdowns: 0 00:20:51.789 Unrecoverable Media Errors: 0 00:20:51.789 Lifetime Error Log Entries: 0 00:20:51.789 Warning Temperature Time: 0 minutes 00:20:51.789 Critical Temperature Time: 0 minutes 00:20:51.789 00:20:51.789 Number of Queues 00:20:51.789 ================ 00:20:51.789 Number of I/O Submission Queues: 127 00:20:51.789 Number of I/O Completion Queues: 127 00:20:51.789 00:20:51.789 Active Namespaces 00:20:51.789 ================= 00:20:51.789 Namespace ID:1 00:20:51.789 Error Recovery Timeout: Unlimited 00:20:51.789 Command Set Identifier: NVM (00h) 00:20:51.789 Deallocate: Supported 00:20:51.789 Deallocated/Unwritten Error: Not Supported 00:20:51.789 Deallocated Read Value: Unknown 00:20:51.789 Deallocate in Write Zeroes: Not Supported 00:20:51.789 Deallocated Guard Field: 0xFFFF 00:20:51.789 Flush: Supported 00:20:51.789 Reservation: Supported 00:20:51.789 Namespace Sharing Capabilities: Multiple Controllers 00:20:51.789 Size (in LBAs): 131072 (0GiB) 00:20:51.789 Capacity (in LBAs): 131072 (0GiB) 00:20:51.789 Utilization (in LBAs): 131072 (0GiB) 00:20:51.789 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:51.789 EUI64: ABCDEF0123456789 00:20:51.789 UUID: 8fadb40d-8863-43ac-a24f-749d48b462a9 00:20:51.789 Thin Provisioning: Not Supported 00:20:51.789 Per-NS Atomic Units: Yes 00:20:51.789 Atomic Boundary Size (Normal): 0 00:20:51.789 Atomic Boundary Size (PFail): 0 00:20:51.789 Atomic Boundary Offset: 0 00:20:51.789 Maximum Single Source Range Length: 65535 00:20:51.789 Maximum Copy Length: 65535 00:20:51.789 Maximum Source Range Count: 1 00:20:51.789 NGUID/EUI64 Never Reused: No 00:20:51.789 Namespace Write Protected: No 00:20:51.789 Number of LBA Formats: 1 00:20:51.789 Current LBA Format: LBA Format #00 00:20:51.789 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:51.789 00:20:51.789 02:37:32 -- host/identify.sh@51 -- # sync 00:20:51.789 02:37:32 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.789 02:37:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.789 02:37:32 -- common/autotest_common.sh@10 -- # set +x 00:20:52.048 02:37:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.048 02:37:32 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:52.048 02:37:32 -- host/identify.sh@56 -- # nvmftestfini 00:20:52.048 02:37:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:52.048 02:37:32 -- nvmf/common.sh@116 -- # sync 00:20:52.048 02:37:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:52.048 02:37:32 -- nvmf/common.sh@119 -- # set +e 00:20:52.048 02:37:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:52.048 02:37:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:52.048 rmmod nvme_tcp 00:20:52.048 rmmod nvme_fabrics 00:20:52.048 rmmod nvme_keyring 00:20:52.048 02:37:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:52.048 02:37:32 -- nvmf/common.sh@123 -- # set -e 00:20:52.048 02:37:32 -- nvmf/common.sh@124 -- # return 0 00:20:52.048 02:37:32 -- nvmf/common.sh@477 -- # '[' -n 82917 ']' 00:20:52.048 02:37:32 -- nvmf/common.sh@478 -- # killprocess 82917 00:20:52.048 02:37:32 -- common/autotest_common.sh@936 -- # '[' -z 82917 ']' 00:20:52.048 02:37:32 -- common/autotest_common.sh@940 -- # kill -0 82917 00:20:52.048 02:37:32 -- common/autotest_common.sh@941 -- # uname 00:20:52.048 02:37:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:52.048 02:37:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82917 
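The "NVMe over Fabrics controller at 10.0.0.2:4420" report interleaved with the trace above is the Identify Controller and Identify Namespace summary printed by the test before it deletes the subsystem: an SPDK bdev-backed controller (vendor 8086, firmware 24.01.1, max transfer size 131072 bytes, 127 I/O queues) exposing namespace 1 with 131072 LBAs of 512 bytes. A similar report can be produced against a running target with SPDK's identify example application; the binary path and transport-ID string below are assumptions based on a default SPDK build, not quoted from this log:

    ./build/examples/identify \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'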
00:20:52.048 02:37:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:52.048 02:37:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:52.048 killing process with pid 82917 00:20:52.048 02:37:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82917' 00:20:52.048 02:37:32 -- common/autotest_common.sh@955 -- # kill 82917 00:20:52.048 [2024-11-21 02:37:32.521847] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:52.048 02:37:32 -- common/autotest_common.sh@960 -- # wait 82917 00:20:52.306 02:37:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:52.306 02:37:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:52.306 02:37:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:52.306 02:37:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.306 02:37:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:52.306 02:37:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.307 02:37:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.307 02:37:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.307 02:37:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:52.307 ************************************ 00:20:52.307 END TEST nvmf_identify 00:20:52.307 ************************************ 00:20:52.307 00:20:52.307 real 0m2.897s 00:20:52.307 user 0m8.029s 00:20:52.307 sys 0m0.726s 00:20:52.307 02:37:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:52.307 02:37:32 -- common/autotest_common.sh@10 -- # set +x 00:20:52.566 02:37:32 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:52.566 02:37:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:52.566 02:37:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.566 02:37:32 -- common/autotest_common.sh@10 -- # set +x 00:20:52.566 ************************************ 00:20:52.566 START TEST nvmf_perf 00:20:52.566 ************************************ 00:20:52.566 02:37:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:52.566 * Looking for test storage... 
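Between the two tests the harness tears everything down: the nvmf_delete_subsystem RPC removes nqn.2016-06.io.spdk:cnode1, nvmftestfini unloads the nvme_tcp/nvme_fabrics/nvme_keyring host modules, kills the nvmf target process (pid 82917), and flushes the initiator-side interface address before perf.sh starts. A rough shell equivalent of that teardown is sketched below; the pid variable and the netns command are illustrative placeholders, the real helpers live in the harness's nvmf/common.sh:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem first
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring                    # unload the host-side modules
    kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"                      # stop the target app (82917 in this run)
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true              # remove the target netns, if one was used
    ip -4 addr flush nvmf_init_if                                     # clear the initiator interface address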
00:20:52.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:52.566 02:37:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:52.566 02:37:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:52.566 02:37:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:52.566 02:37:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:52.566 02:37:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:52.566 02:37:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:52.566 02:37:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:52.566 02:37:33 -- scripts/common.sh@335 -- # IFS=.-: 00:20:52.566 02:37:33 -- scripts/common.sh@335 -- # read -ra ver1 00:20:52.566 02:37:33 -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.566 02:37:33 -- scripts/common.sh@336 -- # read -ra ver2 00:20:52.566 02:37:33 -- scripts/common.sh@337 -- # local 'op=<' 00:20:52.566 02:37:33 -- scripts/common.sh@339 -- # ver1_l=2 00:20:52.566 02:37:33 -- scripts/common.sh@340 -- # ver2_l=1 00:20:52.566 02:37:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:52.566 02:37:33 -- scripts/common.sh@343 -- # case "$op" in 00:20:52.566 02:37:33 -- scripts/common.sh@344 -- # : 1 00:20:52.566 02:37:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:52.566 02:37:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:52.566 02:37:33 -- scripts/common.sh@364 -- # decimal 1 00:20:52.566 02:37:33 -- scripts/common.sh@352 -- # local d=1 00:20:52.566 02:37:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.566 02:37:33 -- scripts/common.sh@354 -- # echo 1 00:20:52.566 02:37:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:52.566 02:37:33 -- scripts/common.sh@365 -- # decimal 2 00:20:52.566 02:37:33 -- scripts/common.sh@352 -- # local d=2 00:20:52.566 02:37:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.566 02:37:33 -- scripts/common.sh@354 -- # echo 2 00:20:52.566 02:37:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:52.566 02:37:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:52.566 02:37:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:52.566 02:37:33 -- scripts/common.sh@367 -- # return 0 00:20:52.566 02:37:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.566 02:37:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:52.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.566 --rc genhtml_branch_coverage=1 00:20:52.566 --rc genhtml_function_coverage=1 00:20:52.566 --rc genhtml_legend=1 00:20:52.566 --rc geninfo_all_blocks=1 00:20:52.566 --rc geninfo_unexecuted_blocks=1 00:20:52.566 00:20:52.566 ' 00:20:52.566 02:37:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:52.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.567 --rc genhtml_branch_coverage=1 00:20:52.567 --rc genhtml_function_coverage=1 00:20:52.567 --rc genhtml_legend=1 00:20:52.567 --rc geninfo_all_blocks=1 00:20:52.567 --rc geninfo_unexecuted_blocks=1 00:20:52.567 00:20:52.567 ' 00:20:52.567 02:37:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:52.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.567 --rc genhtml_branch_coverage=1 00:20:52.567 --rc genhtml_function_coverage=1 00:20:52.567 --rc genhtml_legend=1 00:20:52.567 --rc geninfo_all_blocks=1 00:20:52.567 --rc geninfo_unexecuted_blocks=1 00:20:52.567 00:20:52.567 ' 00:20:52.567 
02:37:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:52.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.567 --rc genhtml_branch_coverage=1 00:20:52.567 --rc genhtml_function_coverage=1 00:20:52.567 --rc genhtml_legend=1 00:20:52.567 --rc geninfo_all_blocks=1 00:20:52.567 --rc geninfo_unexecuted_blocks=1 00:20:52.567 00:20:52.567 ' 00:20:52.567 02:37:33 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:52.567 02:37:33 -- nvmf/common.sh@7 -- # uname -s 00:20:52.567 02:37:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.567 02:37:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.567 02:37:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.567 02:37:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.567 02:37:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.567 02:37:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.567 02:37:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.567 02:37:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.567 02:37:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.567 02:37:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.567 02:37:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:52.567 02:37:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:20:52.567 02:37:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.567 02:37:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.567 02:37:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:52.567 02:37:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:52.567 02:37:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.567 02:37:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.567 02:37:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.567 02:37:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.567 02:37:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.567 02:37:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.567 02:37:33 -- paths/export.sh@5 -- # export PATH 00:20:52.567 02:37:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.567 02:37:33 -- nvmf/common.sh@46 -- # : 0 00:20:52.567 02:37:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:52.567 02:37:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:52.567 02:37:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:52.567 02:37:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.567 02:37:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.567 02:37:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:52.567 02:37:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:52.567 02:37:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:52.567 02:37:33 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:52.567 02:37:33 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:52.567 02:37:33 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:52.567 02:37:33 -- host/perf.sh@17 -- # nvmftestinit 00:20:52.567 02:37:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:52.567 02:37:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.567 02:37:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:52.567 02:37:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:52.567 02:37:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:52.567 02:37:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.567 02:37:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.567 02:37:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.567 02:37:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:52.567 02:37:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:52.567 02:37:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:52.567 02:37:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:52.567 02:37:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:52.567 02:37:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:52.567 02:37:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.567 02:37:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.567 02:37:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:52.567 02:37:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:52.567 02:37:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:52.567 02:37:33 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:52.567 02:37:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:52.567 02:37:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.567 02:37:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:52.567 02:37:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:52.567 02:37:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:52.567 02:37:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:52.567 02:37:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:52.567 02:37:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:52.567 Cannot find device "nvmf_tgt_br" 00:20:52.567 02:37:33 -- nvmf/common.sh@154 -- # true 00:20:52.567 02:37:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:52.826 Cannot find device "nvmf_tgt_br2" 00:20:52.826 02:37:33 -- nvmf/common.sh@155 -- # true 00:20:52.826 02:37:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:52.826 02:37:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:52.826 Cannot find device "nvmf_tgt_br" 00:20:52.826 02:37:33 -- nvmf/common.sh@157 -- # true 00:20:52.826 02:37:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:52.826 Cannot find device "nvmf_tgt_br2" 00:20:52.826 02:37:33 -- nvmf/common.sh@158 -- # true 00:20:52.826 02:37:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:52.826 02:37:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:52.826 02:37:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:52.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:52.826 02:37:33 -- nvmf/common.sh@161 -- # true 00:20:52.826 02:37:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:52.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:52.826 02:37:33 -- nvmf/common.sh@162 -- # true 00:20:52.826 02:37:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:52.826 02:37:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:52.826 02:37:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:52.826 02:37:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:52.826 02:37:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:52.826 02:37:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:52.826 02:37:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:52.826 02:37:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:52.826 02:37:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:52.826 02:37:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:52.826 02:37:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:52.826 02:37:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:52.826 02:37:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:52.826 02:37:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:52.826 02:37:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
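Condensed from the nvmf_veth_init commands traced above, this is roughly the network layout the test builds before starting the target; a sketch only, reusing the interface names and 10.0.0.0/24 addresses that appear in the log:

# Sketch of the veth/namespace topology from the trace above (names and IPs taken from the log).
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, two for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# The target-facing ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up on both sides; the bridge enslaving, the iptables ACCEPT rule
# for port 4420 and the ping checks follow in the trace below.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up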
00:20:52.826 02:37:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:52.826 02:37:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:52.826 02:37:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:52.826 02:37:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:52.826 02:37:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:52.826 02:37:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:53.085 02:37:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:53.085 02:37:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:53.085 02:37:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:53.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:20:53.085 00:20:53.085 --- 10.0.0.2 ping statistics --- 00:20:53.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.085 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:53.085 02:37:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:53.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:53.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:53.085 00:20:53.085 --- 10.0.0.3 ping statistics --- 00:20:53.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.085 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:53.085 02:37:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:53.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:53.085 00:20:53.085 --- 10.0.0.1 ping statistics --- 00:20:53.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.085 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:53.085 02:37:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.085 02:37:33 -- nvmf/common.sh@421 -- # return 0 00:20:53.085 02:37:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:53.085 02:37:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.085 02:37:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:53.085 02:37:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:53.085 02:37:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.085 02:37:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:53.085 02:37:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:53.085 02:37:33 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:53.085 02:37:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:53.085 02:37:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.085 02:37:33 -- common/autotest_common.sh@10 -- # set +x 00:20:53.085 02:37:33 -- nvmf/common.sh@469 -- # nvmfpid=83152 00:20:53.085 02:37:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:53.085 02:37:33 -- nvmf/common.sh@470 -- # waitforlisten 83152 00:20:53.085 02:37:33 -- common/autotest_common.sh@829 -- # '[' -z 83152 ']' 00:20:53.085 02:37:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.085 02:37:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
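With the bridge in place and reachability confirmed by the pings above, the target is started inside the namespace and configured over its RPC socket before any host-side I/O runs; a condensed sketch of that sequence, using only the paths, NQNs and addresses that appear in this trace:

# Sketch: launch the SPDK NVMe-oF target inside the namespace, as nvmfappstart does above.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

# Simple readiness poll (the test's waitforlisten helper does this against /var/tmp/spdk.sock).
until "$SPDK/scripts/rpc.py" rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

# Transport, subsystem, namespace and listener setup that perf.sh issues next in the trace.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512        # auto-named Malloc0 in this run
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The host side then drives I/O against that listener, e.g. the first fabrics run in the log:
"$SPDK/build/bin/spdk_nvme_perf" -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'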
00:20:53.085 02:37:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.085 02:37:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.085 02:37:33 -- common/autotest_common.sh@10 -- # set +x 00:20:53.085 [2024-11-21 02:37:33.585184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:53.085 [2024-11-21 02:37:33.585280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.085 [2024-11-21 02:37:33.719505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.343 [2024-11-21 02:37:33.805949] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:53.343 [2024-11-21 02:37:33.806105] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.343 [2024-11-21 02:37:33.806118] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.343 [2024-11-21 02:37:33.806127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.343 [2024-11-21 02:37:33.806277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.343 [2024-11-21 02:37:33.806438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.343 [2024-11-21 02:37:33.807097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.343 [2024-11-21 02:37:33.807143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.911 02:37:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.911 02:37:34 -- common/autotest_common.sh@862 -- # return 0 00:20:53.911 02:37:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:53.911 02:37:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.911 02:37:34 -- common/autotest_common.sh@10 -- # set +x 00:20:54.170 02:37:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.170 02:37:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:54.170 02:37:34 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:54.428 02:37:35 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:54.428 02:37:35 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:54.687 02:37:35 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:54.687 02:37:35 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:54.946 02:37:35 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:54.946 02:37:35 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:54.946 02:37:35 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:54.946 02:37:35 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:54.946 02:37:35 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.205 [2024-11-21 02:37:35.734409] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.205 02:37:35 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:55.463 02:37:36 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:20:55.463 02:37:36 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:55.722 02:37:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:55.722 02:37:36 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:55.980 02:37:36 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.239 [2024-11-21 02:37:36.652213] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.239 02:37:36 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:56.239 02:37:36 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:56.239 02:37:36 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:56.239 02:37:36 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:56.240 02:37:36 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:57.614 Initializing NVMe Controllers 00:20:57.614 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:57.614 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:57.614 Initialization complete. Launching workers. 00:20:57.614 ======================================================== 00:20:57.614 Latency(us) 00:20:57.614 Device Information : IOPS MiB/s Average min max 00:20:57.614 PCIE (0000:00:06.0) NSID 1 from core 0: 23584.00 92.12 1356.83 340.20 5625.87 00:20:57.614 ======================================================== 00:20:57.614 Total : 23584.00 92.12 1356.83 340.20 5625.87 00:20:57.614 00:20:57.614 02:37:37 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:58.990 Initializing NVMe Controllers 00:20:58.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:58.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:58.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:58.991 Initialization complete. Launching workers. 
00:20:58.991 ======================================================== 00:20:58.991 Latency(us) 00:20:58.991 Device Information : IOPS MiB/s Average min max 00:20:58.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3718.19 14.52 268.67 97.39 5133.68 00:20:58.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.51 0.48 8096.46 5017.24 12024.65 00:20:58.991 ======================================================== 00:20:58.991 Total : 3841.70 15.01 520.33 97.39 12024.65 00:20:58.991 00:20:58.991 02:37:39 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:00.367 Initializing NVMe Controllers 00:21:00.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:00.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:00.367 Initialization complete. Launching workers. 00:21:00.367 ======================================================== 00:21:00.367 Latency(us) 00:21:00.367 Device Information : IOPS MiB/s Average min max 00:21:00.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10367.73 40.50 3088.06 442.33 7140.51 00:21:00.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2696.37 10.53 11967.91 7089.76 20745.90 00:21:00.367 ======================================================== 00:21:00.367 Total : 13064.10 51.03 4920.82 442.33 20745.90 00:21:00.367 00:21:00.367 02:37:40 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:00.367 02:37:40 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.929 [2024-11-21 02:37:43.115805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115868] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115921] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115974] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.115995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116023] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116030] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116090] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 [2024-11-21 02:37:43.116107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7c50 is same with the state(5) to be set 00:21:02.929 Initializing NVMe Controllers 00:21:02.929 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.929 Controller IO queue size 128, less than required. 00:21:02.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.929 Controller IO queue size 128, less than required. 00:21:02.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:02.929 Initialization complete. Launching workers. 00:21:02.929 ======================================================== 00:21:02.929 Latency(us) 00:21:02.929 Device Information : IOPS MiB/s Average min max 00:21:02.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1645.61 411.40 78929.41 56234.30 158138.09 00:21:02.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 534.90 133.72 243398.22 74971.03 397974.04 00:21:02.929 ======================================================== 00:21:02.929 Total : 2180.51 545.13 119275.08 56234.30 397974.04 00:21:02.929 00:21:02.929 02:37:43 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:02.929 No valid NVMe controllers or AIO or URING devices found 00:21:02.929 Initializing NVMe Controllers 00:21:02.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.929 Controller IO queue size 128, less than required. 00:21:02.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.929 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:02.929 Controller IO queue size 128, less than required. 00:21:02.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.929 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:02.929 WARNING: Some requested NVMe devices were skipped 00:21:02.930 02:37:43 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:05.465 Initializing NVMe Controllers 00:21:05.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.465 Controller IO queue size 128, less than required. 00:21:05.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:05.465 Controller IO queue size 128, less than required. 00:21:05.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:05.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:05.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:05.465 Initialization complete. Launching workers. 
00:21:05.465 00:21:05.465 ==================== 00:21:05.465 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:05.465 TCP transport: 00:21:05.465 polls: 12027 00:21:05.465 idle_polls: 8527 00:21:05.465 sock_completions: 3500 00:21:05.465 nvme_completions: 3829 00:21:05.465 submitted_requests: 5905 00:21:05.465 queued_requests: 1 00:21:05.465 00:21:05.465 ==================== 00:21:05.465 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:05.465 TCP transport: 00:21:05.465 polls: 12575 00:21:05.465 idle_polls: 9220 00:21:05.465 sock_completions: 3355 00:21:05.465 nvme_completions: 6515 00:21:05.465 submitted_requests: 9965 00:21:05.465 queued_requests: 1 00:21:05.465 ======================================================== 00:21:05.465 Latency(us) 00:21:05.465 Device Information : IOPS MiB/s Average min max 00:21:05.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1020.74 255.18 129726.84 83233.68 221255.32 00:21:05.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1691.57 422.89 75844.45 39709.40 117906.43 00:21:05.465 ======================================================== 00:21:05.465 Total : 2712.30 678.08 96122.34 39709.40 221255.32 00:21:05.465 00:21:05.465 02:37:45 -- host/perf.sh@66 -- # sync 00:21:05.465 02:37:46 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.724 02:37:46 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:05.724 02:37:46 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:21:05.724 02:37:46 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:05.983 02:37:46 -- host/perf.sh@72 -- # ls_guid=ccebd280-9bcd-4d9d-8372-ae9ce0145574 00:21:05.983 02:37:46 -- host/perf.sh@73 -- # get_lvs_free_mb ccebd280-9bcd-4d9d-8372-ae9ce0145574 00:21:05.983 02:37:46 -- common/autotest_common.sh@1353 -- # local lvs_uuid=ccebd280-9bcd-4d9d-8372-ae9ce0145574 00:21:05.983 02:37:46 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:05.983 02:37:46 -- common/autotest_common.sh@1355 -- # local fc 00:21:05.983 02:37:46 -- common/autotest_common.sh@1356 -- # local cs 00:21:05.983 02:37:46 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:06.242 02:37:46 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:06.242 { 00:21:06.242 "base_bdev": "Nvme0n1", 00:21:06.242 "block_size": 4096, 00:21:06.242 "cluster_size": 4194304, 00:21:06.242 "free_clusters": 1278, 00:21:06.242 "name": "lvs_0", 00:21:06.242 "total_data_clusters": 1278, 00:21:06.242 "uuid": "ccebd280-9bcd-4d9d-8372-ae9ce0145574" 00:21:06.242 } 00:21:06.242 ]' 00:21:06.242 02:37:46 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="ccebd280-9bcd-4d9d-8372-ae9ce0145574") .free_clusters' 00:21:06.501 02:37:46 -- common/autotest_common.sh@1358 -- # fc=1278 00:21:06.501 02:37:46 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="ccebd280-9bcd-4d9d-8372-ae9ce0145574") .cluster_size' 00:21:06.501 5112 00:21:06.501 02:37:46 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:06.501 02:37:46 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:21:06.501 02:37:46 -- common/autotest_common.sh@1363 -- # echo 5112 00:21:06.501 02:37:46 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:06.501 02:37:46 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u ccebd280-9bcd-4d9d-8372-ae9ce0145574 lbd_0 5112 00:21:06.760 02:37:47 -- host/perf.sh@80 -- # lb_guid=f8dbd91e-218f-4978-a8b7-74c1d64fc412 00:21:06.760 02:37:47 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore f8dbd91e-218f-4978-a8b7-74c1d64fc412 lvs_n_0 00:21:07.018 02:37:47 -- host/perf.sh@83 -- # ls_nested_guid=6e58f652-eb2e-4ffd-8f22-14688a48b7f2 00:21:07.018 02:37:47 -- host/perf.sh@84 -- # get_lvs_free_mb 6e58f652-eb2e-4ffd-8f22-14688a48b7f2 00:21:07.018 02:37:47 -- common/autotest_common.sh@1353 -- # local lvs_uuid=6e58f652-eb2e-4ffd-8f22-14688a48b7f2 00:21:07.018 02:37:47 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:07.018 02:37:47 -- common/autotest_common.sh@1355 -- # local fc 00:21:07.018 02:37:47 -- common/autotest_common.sh@1356 -- # local cs 00:21:07.018 02:37:47 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:07.586 02:37:47 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:07.586 { 00:21:07.586 "base_bdev": "Nvme0n1", 00:21:07.586 "block_size": 4096, 00:21:07.586 "cluster_size": 4194304, 00:21:07.586 "free_clusters": 0, 00:21:07.586 "name": "lvs_0", 00:21:07.586 "total_data_clusters": 1278, 00:21:07.586 "uuid": "ccebd280-9bcd-4d9d-8372-ae9ce0145574" 00:21:07.586 }, 00:21:07.586 { 00:21:07.586 "base_bdev": "f8dbd91e-218f-4978-a8b7-74c1d64fc412", 00:21:07.586 "block_size": 4096, 00:21:07.586 "cluster_size": 4194304, 00:21:07.586 "free_clusters": 1276, 00:21:07.586 "name": "lvs_n_0", 00:21:07.586 "total_data_clusters": 1276, 00:21:07.586 "uuid": "6e58f652-eb2e-4ffd-8f22-14688a48b7f2" 00:21:07.586 } 00:21:07.586 ]' 00:21:07.587 02:37:47 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="6e58f652-eb2e-4ffd-8f22-14688a48b7f2") .free_clusters' 00:21:07.587 02:37:47 -- common/autotest_common.sh@1358 -- # fc=1276 00:21:07.587 02:37:47 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="6e58f652-eb2e-4ffd-8f22-14688a48b7f2") .cluster_size' 00:21:07.587 02:37:48 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:07.587 5104 00:21:07.587 02:37:48 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:21:07.587 02:37:48 -- common/autotest_common.sh@1363 -- # echo 5104 00:21:07.587 02:37:48 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:07.587 02:37:48 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6e58f652-eb2e-4ffd-8f22-14688a48b7f2 lbd_nest_0 5104 00:21:07.845 02:37:48 -- host/perf.sh@88 -- # lb_nested_guid=4640fe09-f3e7-4095-b626-cae4f1187676 00:21:07.845 02:37:48 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.104 02:37:48 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:08.104 02:37:48 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4640fe09-f3e7-4095-b626-cae4f1187676 00:21:08.363 02:37:48 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.363 02:37:48 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:08.364 02:37:48 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:08.364 02:37:48 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:08.364 02:37:48 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:08.364 02:37:48 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:08.931 No valid NVMe controllers or AIO or URING devices found 00:21:08.931 Initializing NVMe Controllers 00:21:08.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.931 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:08.931 WARNING: Some requested NVMe devices were skipped 00:21:08.931 02:37:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:08.931 02:37:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:18.909 Initializing NVMe Controllers 00:21:18.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:18.909 Initialization complete. Launching workers. 00:21:18.909 ======================================================== 00:21:18.909 Latency(us) 00:21:18.909 Device Information : IOPS MiB/s Average min max 00:21:18.909 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 873.60 109.20 1144.35 376.81 7722.60 00:21:18.909 ======================================================== 00:21:18.909 Total : 873.60 109.20 1144.35 376.81 7722.60 00:21:18.909 00:21:19.168 02:37:59 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:19.168 02:37:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:19.168 02:37:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:19.426 No valid NVMe controllers or AIO or URING devices found 00:21:19.426 Initializing NVMe Controllers 00:21:19.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.426 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:19.426 WARNING: Some requested NVMe devices were skipped 00:21:19.426 02:37:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:19.426 02:37:59 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:31.637 Initializing NVMe Controllers 00:21:31.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:31.637 Initialization complete. Launching workers. 
00:21:31.637 ======================================================== 00:21:31.637 Latency(us) 00:21:31.637 Device Information : IOPS MiB/s Average min max 00:21:31.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 984.50 123.06 32563.02 7964.67 289824.56 00:21:31.637 ======================================================== 00:21:31.637 Total : 984.50 123.06 32563.02 7964.67 289824.56 00:21:31.637 00:21:31.637 02:38:10 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:31.637 02:38:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:31.637 02:38:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:31.637 No valid NVMe controllers or AIO or URING devices found 00:21:31.637 Initializing NVMe Controllers 00:21:31.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.637 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:31.637 WARNING: Some requested NVMe devices were skipped 00:21:31.637 02:38:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:31.637 02:38:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:41.629 Initializing NVMe Controllers 00:21:41.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:41.629 Controller IO queue size 128, less than required. 00:21:41.629 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:41.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:41.629 Initialization complete. Launching workers. 
00:21:41.629 ======================================================== 00:21:41.629 Latency(us) 00:21:41.629 Device Information : IOPS MiB/s Average min max 00:21:41.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3747.48 468.43 34222.12 11684.62 62895.33 00:21:41.629 ======================================================== 00:21:41.629 Total : 3747.48 468.43 34222.12 11684.62 62895.33 00:21:41.629 00:21:41.629 02:38:20 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.629 02:38:21 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4640fe09-f3e7-4095-b626-cae4f1187676 00:21:41.629 02:38:21 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:41.629 02:38:21 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f8dbd91e-218f-4978-a8b7-74c1d64fc412 00:21:41.629 02:38:22 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:41.629 02:38:22 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:41.629 02:38:22 -- host/perf.sh@114 -- # nvmftestfini 00:21:41.629 02:38:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:41.629 02:38:22 -- nvmf/common.sh@116 -- # sync 00:21:41.629 02:38:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:41.629 02:38:22 -- nvmf/common.sh@119 -- # set +e 00:21:41.629 02:38:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:41.629 02:38:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:41.629 rmmod nvme_tcp 00:21:41.629 rmmod nvme_fabrics 00:21:41.888 rmmod nvme_keyring 00:21:41.888 02:38:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:41.888 02:38:22 -- nvmf/common.sh@123 -- # set -e 00:21:41.888 02:38:22 -- nvmf/common.sh@124 -- # return 0 00:21:41.888 02:38:22 -- nvmf/common.sh@477 -- # '[' -n 83152 ']' 00:21:41.888 02:38:22 -- nvmf/common.sh@478 -- # killprocess 83152 00:21:41.888 02:38:22 -- common/autotest_common.sh@936 -- # '[' -z 83152 ']' 00:21:41.888 02:38:22 -- common/autotest_common.sh@940 -- # kill -0 83152 00:21:41.888 02:38:22 -- common/autotest_common.sh@941 -- # uname 00:21:41.888 02:38:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:41.888 02:38:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83152 00:21:41.888 killing process with pid 83152 00:21:41.888 02:38:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:41.888 02:38:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:41.888 02:38:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83152' 00:21:41.888 02:38:22 -- common/autotest_common.sh@955 -- # kill 83152 00:21:41.888 02:38:22 -- common/autotest_common.sh@960 -- # wait 83152 00:21:43.267 02:38:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:43.267 02:38:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:43.267 02:38:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:43.267 02:38:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.267 02:38:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:43.267 02:38:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.267 02:38:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.267 02:38:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.267 02:38:23 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:43.267 00:21:43.267 real 0m50.915s 00:21:43.267 user 3m11.057s 00:21:43.267 sys 0m10.630s 00:21:43.267 02:38:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:43.267 02:38:23 -- common/autotest_common.sh@10 -- # set +x 00:21:43.267 ************************************ 00:21:43.267 END TEST nvmf_perf 00:21:43.267 ************************************ 00:21:43.526 02:38:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:43.526 02:38:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:43.526 02:38:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:43.526 02:38:23 -- common/autotest_common.sh@10 -- # set +x 00:21:43.526 ************************************ 00:21:43.526 START TEST nvmf_fio_host 00:21:43.526 ************************************ 00:21:43.526 02:38:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:43.527 * Looking for test storage... 00:21:43.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:43.527 02:38:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:43.527 02:38:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:43.527 02:38:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:43.527 02:38:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:43.527 02:38:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:43.527 02:38:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:43.527 02:38:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:43.527 02:38:24 -- scripts/common.sh@335 -- # IFS=.-: 00:21:43.527 02:38:24 -- scripts/common.sh@335 -- # read -ra ver1 00:21:43.527 02:38:24 -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.527 02:38:24 -- scripts/common.sh@336 -- # read -ra ver2 00:21:43.527 02:38:24 -- scripts/common.sh@337 -- # local 'op=<' 00:21:43.527 02:38:24 -- scripts/common.sh@339 -- # ver1_l=2 00:21:43.527 02:38:24 -- scripts/common.sh@340 -- # ver2_l=1 00:21:43.527 02:38:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:43.527 02:38:24 -- scripts/common.sh@343 -- # case "$op" in 00:21:43.527 02:38:24 -- scripts/common.sh@344 -- # : 1 00:21:43.527 02:38:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:43.527 02:38:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.527 02:38:24 -- scripts/common.sh@364 -- # decimal 1 00:21:43.527 02:38:24 -- scripts/common.sh@352 -- # local d=1 00:21:43.527 02:38:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.527 02:38:24 -- scripts/common.sh@354 -- # echo 1 00:21:43.527 02:38:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:43.527 02:38:24 -- scripts/common.sh@365 -- # decimal 2 00:21:43.527 02:38:24 -- scripts/common.sh@352 -- # local d=2 00:21:43.527 02:38:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.527 02:38:24 -- scripts/common.sh@354 -- # echo 2 00:21:43.527 02:38:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:43.527 02:38:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:43.527 02:38:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:43.527 02:38:24 -- scripts/common.sh@367 -- # return 0 00:21:43.527 02:38:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.527 02:38:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:43.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.527 --rc genhtml_branch_coverage=1 00:21:43.527 --rc genhtml_function_coverage=1 00:21:43.527 --rc genhtml_legend=1 00:21:43.527 --rc geninfo_all_blocks=1 00:21:43.527 --rc geninfo_unexecuted_blocks=1 00:21:43.527 00:21:43.527 ' 00:21:43.527 02:38:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:43.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.527 --rc genhtml_branch_coverage=1 00:21:43.527 --rc genhtml_function_coverage=1 00:21:43.527 --rc genhtml_legend=1 00:21:43.527 --rc geninfo_all_blocks=1 00:21:43.527 --rc geninfo_unexecuted_blocks=1 00:21:43.527 00:21:43.527 ' 00:21:43.527 02:38:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:43.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.527 --rc genhtml_branch_coverage=1 00:21:43.527 --rc genhtml_function_coverage=1 00:21:43.527 --rc genhtml_legend=1 00:21:43.527 --rc geninfo_all_blocks=1 00:21:43.527 --rc geninfo_unexecuted_blocks=1 00:21:43.527 00:21:43.527 ' 00:21:43.527 02:38:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:43.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.527 --rc genhtml_branch_coverage=1 00:21:43.527 --rc genhtml_function_coverage=1 00:21:43.527 --rc genhtml_legend=1 00:21:43.527 --rc geninfo_all_blocks=1 00:21:43.527 --rc geninfo_unexecuted_blocks=1 00:21:43.527 00:21:43.527 ' 00:21:43.527 02:38:24 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.527 02:38:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.527 02:38:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.527 02:38:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.527 02:38:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.527 02:38:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.527 02:38:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.527 02:38:24 -- paths/export.sh@5 -- # export PATH 00:21:43.527 02:38:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.527 02:38:24 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:43.527 02:38:24 -- nvmf/common.sh@7 -- # uname -s 00:21:43.527 02:38:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.527 02:38:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.527 02:38:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.527 02:38:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.527 02:38:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.527 02:38:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.527 02:38:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.527 02:38:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.527 02:38:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.527 02:38:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.527 02:38:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:21:43.527 02:38:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:21:43.527 02:38:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.527 02:38:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.527 02:38:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:43.527 02:38:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.527 02:38:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.527 02:38:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.527 02:38:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.527 02:38:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.527 02:38:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.527 02:38:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.527 02:38:24 -- paths/export.sh@5 -- # export PATH 00:21:43.527 02:38:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.527 02:38:24 -- nvmf/common.sh@46 -- # : 0 00:21:43.527 02:38:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:43.527 02:38:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:43.527 02:38:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:43.527 02:38:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.527 02:38:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.527 02:38:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:43.527 02:38:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:43.527 02:38:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:43.527 02:38:24 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:43.527 02:38:24 -- host/fio.sh@14 -- # nvmftestinit 00:21:43.527 02:38:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:43.527 02:38:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.527 02:38:24 -- nvmf/common.sh@436 -- # prepare_net_devs 
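The identity variables traced here (NVME_HOSTNQN generated by nvme gen-hostnqn, NVME_HOSTID, NVMF_PORT=4420, NVME_CONNECT) are what the suite's tests pass to the kernel initiator when they connect with nvme-cli; a hedged sketch of that usage with the values from this trace, while this fio test typically drives I/O through SPDK's own fio plugin rather than the kernel path:

# Sketch only: how the common.sh host identity maps onto an nvme-cli connect.
# The UUID below is the one gen-hostnqn happened to produce in this trace; a fresh run
# generates a different value, and the subsystem NQN is the cnode1 used throughout these tests.
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b
NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"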
00:21:43.527 02:38:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:43.527 02:38:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:43.527 02:38:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.528 02:38:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.528 02:38:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.528 02:38:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:43.528 02:38:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:43.528 02:38:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:43.528 02:38:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:43.528 02:38:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:43.528 02:38:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:43.528 02:38:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.528 02:38:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.528 02:38:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:43.528 02:38:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:43.528 02:38:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:43.528 02:38:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:43.528 02:38:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:43.528 02:38:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.528 02:38:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:43.528 02:38:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:43.528 02:38:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:43.528 02:38:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:43.528 02:38:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:43.786 02:38:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:43.786 Cannot find device "nvmf_tgt_br" 00:21:43.786 02:38:24 -- nvmf/common.sh@154 -- # true 00:21:43.786 02:38:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.786 Cannot find device "nvmf_tgt_br2" 00:21:43.786 02:38:24 -- nvmf/common.sh@155 -- # true 00:21:43.786 02:38:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:43.786 02:38:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:43.786 Cannot find device "nvmf_tgt_br" 00:21:43.786 02:38:24 -- nvmf/common.sh@157 -- # true 00:21:43.786 02:38:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:43.786 Cannot find device "nvmf_tgt_br2" 00:21:43.786 02:38:24 -- nvmf/common.sh@158 -- # true 00:21:43.786 02:38:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:43.786 02:38:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:43.786 02:38:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:43.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.786 02:38:24 -- nvmf/common.sh@161 -- # true 00:21:43.786 02:38:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.786 02:38:24 -- nvmf/common.sh@162 -- # true 00:21:43.786 02:38:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:43.786 02:38:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.786 02:38:24 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.786 02:38:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.786 02:38:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.786 02:38:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.786 02:38:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.786 02:38:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:43.786 02:38:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:43.786 02:38:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:43.786 02:38:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:43.786 02:38:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:43.786 02:38:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:43.786 02:38:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.786 02:38:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.786 02:38:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.786 02:38:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:44.045 02:38:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:44.045 02:38:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:44.045 02:38:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:44.045 02:38:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:44.045 02:38:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:44.045 02:38:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:44.045 02:38:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:44.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:44.045 00:21:44.045 --- 10.0.0.2 ping statistics --- 00:21:44.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.045 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:44.045 02:38:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:44.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:44.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:21:44.045 00:21:44.045 --- 10.0.0.3 ping statistics --- 00:21:44.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.045 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:44.045 02:38:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:44.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:44.045 00:21:44.045 --- 10.0.0.1 ping statistics --- 00:21:44.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.045 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:44.045 02:38:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.045 02:38:24 -- nvmf/common.sh@421 -- # return 0 00:21:44.045 02:38:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:44.045 02:38:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.045 02:38:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:44.045 02:38:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:44.045 02:38:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.045 02:38:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:44.045 02:38:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:44.045 02:38:24 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:44.045 02:38:24 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:44.045 02:38:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.045 02:38:24 -- common/autotest_common.sh@10 -- # set +x 00:21:44.045 02:38:24 -- host/fio.sh@24 -- # nvmfpid=84130 00:21:44.045 02:38:24 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:44.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.045 02:38:24 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.045 02:38:24 -- host/fio.sh@28 -- # waitforlisten 84130 00:21:44.045 02:38:24 -- common/autotest_common.sh@829 -- # '[' -z 84130 ']' 00:21:44.045 02:38:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.045 02:38:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.045 02:38:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.045 02:38:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.045 02:38:24 -- common/autotest_common.sh@10 -- # set +x 00:21:44.045 [2024-11-21 02:38:24.564867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:44.045 [2024-11-21 02:38:24.565110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.305 [2024-11-21 02:38:24.698891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:44.305 [2024-11-21 02:38:24.787383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:44.305 [2024-11-21 02:38:24.787830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.305 [2024-11-21 02:38:24.787887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.305 [2024-11-21 02:38:24.788086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
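The nvmf_veth_init steps traced above give the TCP target and the initiator a private fabric to talk over, with no physical NICs involved: a network namespace (nvmf_tgt_ns_spdk) holds the target-side interfaces, each side gets one end of a veth pair, the host-side peers are enslaved to the nvmf_br bridge, and an iptables rule admits NVMe/TCP traffic on port 4420. A condensed, illustrative sketch of that topology using the same names and addresses as the log (no error handling, ordering simplified):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + its bridge-side peer
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end + its bridge-side peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge ties the two pairs together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in

The log also creates a second target interface (nvmf_tgt_if2 at 10.0.0.3) and brings every link up; those lines are omitted here. The three pings (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) are just a sanity check that the bridge forwards in both directions before nvmf_tgt is launched inside the namespace.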
00:21:44.305 [2024-11-21 02:38:24.788294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.305 [2024-11-21 02:38:24.788395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.305 [2024-11-21 02:38:24.788472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:44.305 [2024-11-21 02:38:24.788472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.242 02:38:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.242 02:38:25 -- common/autotest_common.sh@862 -- # return 0 00:21:45.242 02:38:25 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:45.242 [2024-11-21 02:38:25.734134] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.242 02:38:25 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:45.242 02:38:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.242 02:38:25 -- common/autotest_common.sh@10 -- # set +x 00:21:45.242 02:38:25 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:45.501 Malloc1 00:21:45.501 02:38:26 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:45.759 02:38:26 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:46.018 02:38:26 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.277 [2024-11-21 02:38:26.670061] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.277 02:38:26 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:46.277 02:38:26 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:46.277 02:38:26 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.277 02:38:26 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.277 02:38:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:46.277 02:38:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:46.277 02:38:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:46.277 02:38:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:46.277 02:38:26 -- common/autotest_common.sh@1330 -- # shift 00:21:46.277 02:38:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:46.277 02:38:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.277 02:38:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:46.277 02:38:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:46.277 02:38:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:46.536 02:38:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:46.536 02:38:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:46.536 02:38:26 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.536 02:38:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:46.536 02:38:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:46.536 02:38:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:46.536 02:38:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:46.536 02:38:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:46.536 02:38:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:46.536 02:38:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.536 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:46.536 fio-3.35 00:21:46.536 Starting 1 thread 00:21:49.072 00:21:49.072 test: (groupid=0, jobs=1): err= 0: pid=84257: Thu Nov 21 02:38:29 2024 00:21:49.072 read: IOPS=10.2k, BW=40.0MiB/s (42.0MB/s)(80.3MiB/2006msec) 00:21:49.072 slat (nsec): min=1735, max=272550, avg=2183.68, stdev=2752.66 00:21:49.072 clat (usec): min=2860, max=11449, avg=6625.33, stdev=585.46 00:21:49.072 lat (usec): min=2888, max=11451, avg=6627.51, stdev=585.43 00:21:49.072 clat percentiles (usec): 00:21:49.072 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6194], 00:21:49.072 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:21:49.072 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7373], 95.00th=[ 7635], 00:21:49.072 | 99.00th=[ 8225], 99.50th=[ 8586], 99.90th=[ 9503], 99.95th=[10945], 00:21:49.072 | 99.99th=[11338] 00:21:49.072 bw ( KiB/s): min=40296, max=41776, per=99.96%, avg=40966.00, stdev=617.41, samples=4 00:21:49.072 iops : min=10074, max=10444, avg=10241.50, stdev=154.35, samples=4 00:21:49.072 write: IOPS=10.2k, BW=40.0MiB/s (42.0MB/s)(80.3MiB/2006msec); 0 zone resets 00:21:49.072 slat (nsec): min=1835, max=208146, avg=2284.09, stdev=2070.20 00:21:49.072 clat (usec): min=2017, max=11064, avg=5822.68, stdev=485.31 00:21:49.072 lat (usec): min=2028, max=11067, avg=5824.97, stdev=485.31 00:21:49.072 clat percentiles (usec): 00:21:49.072 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:21:49.072 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:21:49.072 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6587], 00:21:49.072 | 99.00th=[ 7111], 99.50th=[ 7373], 99.90th=[ 8979], 99.95th=[10290], 00:21:49.072 | 99.99th=[10945] 00:21:49.072 bw ( KiB/s): min=40128, max=41664, per=100.00%, avg=41000.00, stdev=687.41, samples=4 00:21:49.073 iops : min=10032, max=10416, avg=10250.00, stdev=171.85, samples=4 00:21:49.073 lat (msec) : 4=0.10%, 10=99.83%, 20=0.07% 00:21:49.073 cpu : usr=68.79%, sys=22.73%, ctx=10, majf=0, minf=5 00:21:49.073 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:49.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.073 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:49.073 issued rwts: total=20553,20557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:49.073 00:21:49.073 Run status group 0 (all jobs): 00:21:49.073 READ: bw=40.0MiB/s (42.0MB/s), 40.0MiB/s-40.0MiB/s (42.0MB/s-42.0MB/s), io=80.3MiB (84.2MB), 
run=2006-2006msec 00:21:49.073 WRITE: bw=40.0MiB/s (42.0MB/s), 40.0MiB/s-40.0MiB/s (42.0MB/s-42.0MB/s), io=80.3MiB (84.2MB), run=2006-2006msec 00:21:49.073 02:38:29 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:49.073 02:38:29 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:49.073 02:38:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:49.073 02:38:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:49.073 02:38:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:49.073 02:38:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:49.073 02:38:29 -- common/autotest_common.sh@1330 -- # shift 00:21:49.073 02:38:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:49.073 02:38:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.073 02:38:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:49.073 02:38:29 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:49.073 02:38:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:49.073 02:38:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:49.073 02:38:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:49.073 02:38:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:49.073 02:38:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:49.073 02:38:29 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:49.073 02:38:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:49.073 02:38:29 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:49.073 02:38:29 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:49.073 02:38:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:49.073 02:38:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:49.073 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:49.073 fio-3.35 00:21:49.073 Starting 1 thread 00:21:51.606 00:21:51.606 test: (groupid=0, jobs=1): err= 0: pid=84303: Thu Nov 21 02:38:31 2024 00:21:51.606 read: IOPS=8671, BW=135MiB/s (142MB/s)(272MiB/2007msec) 00:21:51.606 slat (usec): min=2, max=164, avg= 3.49, stdev= 2.61 00:21:51.606 clat (usec): min=2441, max=21304, avg=8807.10, stdev=2332.28 00:21:51.606 lat (usec): min=2444, max=21309, avg=8810.59, stdev=2332.58 00:21:51.606 clat percentiles (usec): 00:21:51.606 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6783], 00:21:51.606 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9241], 00:21:51.606 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11731], 95.00th=[13304], 00:21:51.606 | 99.00th=[15926], 99.50th=[16450], 99.90th=[18220], 99.95th=[18744], 00:21:51.606 | 99.99th=[19530] 00:21:51.606 bw ( KiB/s): min=65632, max=81440, per=51.19%, avg=71024.00, stdev=7078.56, samples=4 00:21:51.606 iops : 
min= 4102, max= 5090, avg=4439.00, stdev=442.41, samples=4 00:21:51.606 write: IOPS=5099, BW=79.7MiB/s (83.6MB/s)(145MiB/1821msec); 0 zone resets 00:21:51.606 slat (usec): min=29, max=358, avg=35.25, stdev=10.47 00:21:51.606 clat (usec): min=3244, max=22881, avg=10515.43, stdev=2182.06 00:21:51.606 lat (usec): min=3275, max=22928, avg=10550.69, stdev=2185.59 00:21:51.606 clat percentiles (usec): 00:21:51.606 | 1.00th=[ 6718], 5.00th=[ 7832], 10.00th=[ 8225], 20.00th=[ 8717], 00:21:51.606 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10683], 00:21:51.606 | 70.00th=[11207], 80.00th=[11994], 90.00th=[13435], 95.00th=[14615], 00:21:51.606 | 99.00th=[17171], 99.50th=[18220], 99.90th=[21627], 99.95th=[22152], 00:21:51.606 | 99.99th=[22938] 00:21:51.606 bw ( KiB/s): min=69120, max=83552, per=90.45%, avg=73808.00, stdev=6651.82, samples=4 00:21:51.606 iops : min= 4320, max= 5222, avg=4613.00, stdev=415.74, samples=4 00:21:51.606 lat (msec) : 4=0.33%, 10=64.08%, 20=35.50%, 50=0.08% 00:21:51.606 cpu : usr=65.15%, sys=22.63%, ctx=12, majf=0, minf=2 00:21:51.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:51.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.606 issued rwts: total=17403,9287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.606 00:21:51.606 Run status group 0 (all jobs): 00:21:51.606 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=272MiB (285MB), run=2007-2007msec 00:21:51.606 WRITE: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=145MiB (152MB), run=1821-1821msec 00:21:51.606 02:38:31 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.606 02:38:32 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:51.606 02:38:32 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:51.606 02:38:32 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:51.606 02:38:32 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:51.606 02:38:32 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:51.606 02:38:32 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:51.606 02:38:32 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:51.606 02:38:32 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:51.606 02:38:32 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:51.606 02:38:32 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:51.606 02:38:32 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:51.865 Nvme0n1 00:21:51.865 02:38:32 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:52.124 02:38:32 -- host/fio.sh@53 -- # ls_guid=64f87fef-fee8-43fa-8dea-6c68a8c9beda 00:21:52.124 02:38:32 -- host/fio.sh@54 -- # get_lvs_free_mb 64f87fef-fee8-43fa-8dea-6c68a8c9beda 00:21:52.124 02:38:32 -- common/autotest_common.sh@1353 -- # local lvs_uuid=64f87fef-fee8-43fa-8dea-6c68a8c9beda 00:21:52.124 02:38:32 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:52.124 02:38:32 -- common/autotest_common.sh@1355 -- # local fc 00:21:52.124 02:38:32 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:52.124 02:38:32 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:52.383 02:38:32 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:52.383 { 00:21:52.383 "base_bdev": "Nvme0n1", 00:21:52.383 "block_size": 4096, 00:21:52.383 "cluster_size": 1073741824, 00:21:52.383 "free_clusters": 4, 00:21:52.383 "name": "lvs_0", 00:21:52.383 "total_data_clusters": 4, 00:21:52.383 "uuid": "64f87fef-fee8-43fa-8dea-6c68a8c9beda" 00:21:52.383 } 00:21:52.383 ]' 00:21:52.383 02:38:32 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="64f87fef-fee8-43fa-8dea-6c68a8c9beda") .free_clusters' 00:21:52.383 02:38:32 -- common/autotest_common.sh@1358 -- # fc=4 00:21:52.383 02:38:32 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="64f87fef-fee8-43fa-8dea-6c68a8c9beda") .cluster_size' 00:21:52.383 4096 00:21:52.383 02:38:32 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:52.383 02:38:32 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:52.383 02:38:32 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:52.383 02:38:32 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:52.641 5367666c-2f76-4414-b377-2d85ea0c7525 00:21:52.641 02:38:33 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:53.209 02:38:33 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:53.209 02:38:33 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:53.468 02:38:33 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.468 02:38:33 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.468 02:38:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:53.468 02:38:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.468 02:38:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:53.468 02:38:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:53.468 02:38:33 -- common/autotest_common.sh@1330 -- # shift 00:21:53.468 02:38:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:53.468 02:38:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.468 02:38:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:53.468 02:38:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:53.468 02:38:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:53.468 02:38:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:53.468 02:38:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:53.468 02:38:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.468 02:38:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:53.468 02:38:34 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:53.468 02:38:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:53.468 02:38:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:53.468 02:38:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:53.468 02:38:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:53.468 02:38:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.727 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:53.727 fio-3.35 00:21:53.727 Starting 1 thread 00:21:56.261 00:21:56.261 test: (groupid=0, jobs=1): err= 0: pid=84460: Thu Nov 21 02:38:36 2024 00:21:56.261 read: IOPS=6411, BW=25.0MiB/s (26.3MB/s)(50.3MiB/2009msec) 00:21:56.261 slat (nsec): min=1775, max=420649, avg=2955.38, stdev=5128.81 00:21:56.261 clat (usec): min=4569, max=18299, avg=10622.65, stdev=1001.29 00:21:56.261 lat (usec): min=4578, max=18302, avg=10625.60, stdev=1001.06 00:21:56.261 clat percentiles (usec): 00:21:56.261 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:21:56.261 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:21:56.261 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:21:56.261 | 99.00th=[12911], 99.50th=[13304], 99.90th=[16909], 99.95th=[17695], 00:21:56.261 | 99.99th=[18220] 00:21:56.261 bw ( KiB/s): min=24391, max=26256, per=99.90%, avg=25619.75, stdev=841.63, samples=4 00:21:56.261 iops : min= 6097, max= 6564, avg=6404.75, stdev=210.77, samples=4 00:21:56.261 write: IOPS=6414, BW=25.1MiB/s (26.3MB/s)(50.3MiB/2009msec); 0 zone resets 00:21:56.262 slat (nsec): min=1892, max=261685, avg=3096.00, stdev=3671.42 00:21:56.262 clat (usec): min=2615, max=18423, avg=9267.67, stdev=884.44 00:21:56.262 lat (usec): min=2627, max=18426, avg=9270.77, stdev=884.33 00:21:56.262 clat percentiles (usec): 00:21:56.262 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8586], 00:21:56.262 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9503], 00:21:56.262 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:21:56.262 | 99.00th=[11076], 99.50th=[11469], 99.90th=[15664], 99.95th=[16712], 00:21:56.262 | 99.99th=[18220] 00:21:56.262 bw ( KiB/s): min=25392, max=25896, per=99.91%, avg=25635.00, stdev=222.22, samples=4 00:21:56.262 iops : min= 6348, max= 6474, avg=6408.75, stdev=55.55, samples=4 00:21:56.262 lat (msec) : 4=0.04%, 10=54.37%, 20=45.59% 00:21:56.262 cpu : usr=70.27%, sys=22.01%, ctx=8, majf=0, minf=5 00:21:56.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:56.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:56.262 issued rwts: total=12880,12887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:56.262 00:21:56.262 Run status group 0 (all jobs): 00:21:56.262 READ: bw=25.0MiB/s (26.3MB/s), 25.0MiB/s-25.0MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.8MB), run=2009-2009msec 00:21:56.262 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.8MB), run=2009-2009msec 00:21:56.262 02:38:36 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:56.262 02:38:36 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:56.549 02:38:36 -- host/fio.sh@64 -- # ls_nested_guid=b2cf1786-30be-46d0-acdc-857f7d68d8ca 00:21:56.549 02:38:36 -- host/fio.sh@65 -- # get_lvs_free_mb b2cf1786-30be-46d0-acdc-857f7d68d8ca 00:21:56.549 02:38:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=b2cf1786-30be-46d0-acdc-857f7d68d8ca 00:21:56.549 02:38:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:56.549 02:38:36 -- common/autotest_common.sh@1355 -- # local fc 00:21:56.549 02:38:36 -- common/autotest_common.sh@1356 -- # local cs 00:21:56.549 02:38:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:56.848 02:38:37 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:56.848 { 00:21:56.848 "base_bdev": "Nvme0n1", 00:21:56.848 "block_size": 4096, 00:21:56.848 "cluster_size": 1073741824, 00:21:56.848 "free_clusters": 0, 00:21:56.848 "name": "lvs_0", 00:21:56.848 "total_data_clusters": 4, 00:21:56.848 "uuid": "64f87fef-fee8-43fa-8dea-6c68a8c9beda" 00:21:56.848 }, 00:21:56.848 { 00:21:56.848 "base_bdev": "5367666c-2f76-4414-b377-2d85ea0c7525", 00:21:56.848 "block_size": 4096, 00:21:56.848 "cluster_size": 4194304, 00:21:56.848 "free_clusters": 1022, 00:21:56.848 "name": "lvs_n_0", 00:21:56.848 "total_data_clusters": 1022, 00:21:56.848 "uuid": "b2cf1786-30be-46d0-acdc-857f7d68d8ca" 00:21:56.848 } 00:21:56.848 ]' 00:21:56.848 02:38:37 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="b2cf1786-30be-46d0-acdc-857f7d68d8ca") .free_clusters' 00:21:56.848 02:38:37 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:56.848 02:38:37 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="b2cf1786-30be-46d0-acdc-857f7d68d8ca") .cluster_size' 00:21:56.848 4088 00:21:56.848 02:38:37 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:56.848 02:38:37 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:56.848 02:38:37 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:56.848 02:38:37 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:57.124 8b6f496f-ccf7-42b8-a940-3e9accc35d62 00:21:57.124 02:38:37 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:57.396 02:38:37 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:57.396 02:38:38 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:57.654 02:38:38 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:57.654 02:38:38 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:57.654 02:38:38 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:57.654 02:38:38 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:57.654 
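The provisioning just traced stacks a second logical-volume store on top of a logical volume: lvs_0 (1 GiB clusters) was carved out of Nvme0n1, lbd_0 consumed all 4 GiB of it, and lvs_n_0 is now created on lbd_0 with 4 MiB clusters. get_lvs_free_mb then reads free_clusters and cluster_size back from bdev_lvol_get_lvstores: 1022 usable clusters x 4 MiB = 4088 MiB (the 4096 MiB volume loses a little to lvstore metadata), which is why lbd_nest_0 is sized at 4088 rather than 4096. An abbreviated sketch of the RPC sequence, where rpc.py stands for the full scripts/rpc.py path and the jq filter selects by name instead of uuid for readability:

  rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
  rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_n_0") | .free_clusters, .cluster_size'
  # free MiB = 1022 * 4194304 / 1048576 = 4088
  rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420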
02:38:38 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:57.654 02:38:38 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:57.654 02:38:38 -- common/autotest_common.sh@1330 -- # shift 00:21:57.654 02:38:38 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:57.654 02:38:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:57.654 02:38:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:57.654 02:38:38 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:57.912 02:38:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:57.912 02:38:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:57.912 02:38:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:57.912 02:38:38 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:57.912 02:38:38 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:57.912 02:38:38 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:57.912 02:38:38 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:57.912 02:38:38 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:57.912 02:38:38 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:57.912 02:38:38 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:57.912 02:38:38 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:57.912 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:57.912 fio-3.35 00:21:57.912 Starting 1 thread 00:22:00.447 00:22:00.447 test: (groupid=0, jobs=1): err= 0: pid=84576: Thu Nov 21 02:38:40 2024 00:22:00.447 read: IOPS=6485, BW=25.3MiB/s (26.6MB/s)(51.9MiB/2048msec) 00:22:00.447 slat (nsec): min=1770, max=338353, avg=2739.49, stdev=4258.08 00:22:00.447 clat (usec): min=4472, max=59354, avg=10616.11, stdev=3288.41 00:22:00.447 lat (usec): min=4481, max=59357, avg=10618.85, stdev=3288.34 00:22:00.447 clat percentiles (usec): 00:22:00.447 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:22:00.447 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:22:00.447 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:22:00.447 | 99.00th=[13173], 99.50th=[49021], 99.90th=[55837], 99.95th=[56886], 00:22:00.447 | 99.99th=[58983] 00:22:00.447 bw ( KiB/s): min=25400, max=27304, per=100.00%, avg=26424.00, stdev=788.69, samples=4 00:22:00.447 iops : min= 6350, max= 6826, avg=6606.00, stdev=197.17, samples=4 00:22:00.447 write: IOPS=6493, BW=25.4MiB/s (26.6MB/s)(51.9MiB/2048msec); 0 zone resets 00:22:00.447 slat (nsec): min=1844, max=284937, avg=2890.59, stdev=3602.16 00:22:00.447 clat (usec): min=2336, max=55681, avg=9022.59, stdev=2914.83 00:22:00.447 lat (usec): min=2347, max=55683, avg=9025.48, stdev=2914.79 00:22:00.447 clat percentiles (usec): 00:22:00.447 | 1.00th=[ 6849], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8160], 00:22:00.447 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:22:00.447 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10290], 00:22:00.447 | 99.00th=[11207], 99.50th=[12780], 99.90th=[53216], 99.95th=[55313], 00:22:00.447 | 99.99th=[55837] 
00:22:00.447 bw ( KiB/s): min=25920, max=26944, per=100.00%, avg=26486.00, stdev=422.99, samples=4 00:22:00.447 iops : min= 6480, max= 6736, avg=6621.50, stdev=105.75, samples=4 00:22:00.447 lat (msec) : 4=0.04%, 10=63.88%, 20=35.60%, 50=0.11%, 100=0.37% 00:22:00.447 cpu : usr=70.49%, sys=21.93%, ctx=9, majf=0, minf=5 00:22:00.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:00.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:00.447 issued rwts: total=13283,13299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:00.447 00:22:00.447 Run status group 0 (all jobs): 00:22:00.447 READ: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=51.9MiB (54.4MB), run=2048-2048msec 00:22:00.447 WRITE: bw=25.4MiB/s (26.6MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.6MB/s), io=51.9MiB (54.5MB), run=2048-2048msec 00:22:00.447 02:38:40 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:00.447 02:38:41 -- host/fio.sh@74 -- # sync 00:22:00.706 02:38:41 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:22:00.964 02:38:41 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:01.222 02:38:41 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:22:01.222 02:38:41 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:01.480 02:38:42 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:02.045 02:38:42 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:02.045 02:38:42 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:02.045 02:38:42 -- host/fio.sh@86 -- # nvmftestfini 00:22:02.045 02:38:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:02.045 02:38:42 -- nvmf/common.sh@116 -- # sync 00:22:02.045 02:38:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:02.045 02:38:42 -- nvmf/common.sh@119 -- # set +e 00:22:02.045 02:38:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:02.045 02:38:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:02.045 rmmod nvme_tcp 00:22:02.045 rmmod nvme_fabrics 00:22:02.045 rmmod nvme_keyring 00:22:02.045 02:38:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:02.045 02:38:42 -- nvmf/common.sh@123 -- # set -e 00:22:02.045 02:38:42 -- nvmf/common.sh@124 -- # return 0 00:22:02.045 02:38:42 -- nvmf/common.sh@477 -- # '[' -n 84130 ']' 00:22:02.045 02:38:42 -- nvmf/common.sh@478 -- # killprocess 84130 00:22:02.045 02:38:42 -- common/autotest_common.sh@936 -- # '[' -z 84130 ']' 00:22:02.045 02:38:42 -- common/autotest_common.sh@940 -- # kill -0 84130 00:22:02.045 02:38:42 -- common/autotest_common.sh@941 -- # uname 00:22:02.045 02:38:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:02.045 02:38:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84130 00:22:02.045 killing process with pid 84130 00:22:02.045 02:38:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:02.045 02:38:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:02.045 02:38:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84130' 
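All four fio passes in this test follow the same plugin pattern: the SPDK external ioengine (build/fio/spdk_nvme) is injected via LD_PRELOAD, and the NVMe-oF path is encoded in the filename string instead of a device node, so fio's "device" is 'trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'. The ldd/grep loops in the trace exist only to put libasan or libclang_rt.asan ahead of the plugin in LD_PRELOAD on sanitized builds; on this build they find nothing. A condensed sketch of one such invocation, with paths as they appear in the log:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # empty on a non-ASan build
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096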
00:22:02.045 02:38:42 -- common/autotest_common.sh@955 -- # kill 84130 00:22:02.045 02:38:42 -- common/autotest_common.sh@960 -- # wait 84130 00:22:02.303 02:38:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:02.303 02:38:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:02.303 02:38:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:02.303 02:38:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.303 02:38:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:02.303 02:38:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.303 02:38:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.303 02:38:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.303 02:38:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:02.303 00:22:02.303 real 0m18.957s 00:22:02.303 user 1m22.559s 00:22:02.303 sys 0m4.488s 00:22:02.303 02:38:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:02.303 02:38:42 -- common/autotest_common.sh@10 -- # set +x 00:22:02.303 ************************************ 00:22:02.303 END TEST nvmf_fio_host 00:22:02.303 ************************************ 00:22:02.303 02:38:42 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:02.303 02:38:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:02.303 02:38:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:02.303 02:38:42 -- common/autotest_common.sh@10 -- # set +x 00:22:02.303 ************************************ 00:22:02.303 START TEST nvmf_failover 00:22:02.303 ************************************ 00:22:02.303 02:38:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:02.561 * Looking for test storage... 00:22:02.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:02.561 02:38:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:02.561 02:38:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:02.561 02:38:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:02.561 02:38:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:02.561 02:38:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:02.561 02:38:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:02.561 02:38:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:02.561 02:38:43 -- scripts/common.sh@335 -- # IFS=.-: 00:22:02.561 02:38:43 -- scripts/common.sh@335 -- # read -ra ver1 00:22:02.561 02:38:43 -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.561 02:38:43 -- scripts/common.sh@336 -- # read -ra ver2 00:22:02.561 02:38:43 -- scripts/common.sh@337 -- # local 'op=<' 00:22:02.561 02:38:43 -- scripts/common.sh@339 -- # ver1_l=2 00:22:02.561 02:38:43 -- scripts/common.sh@340 -- # ver2_l=1 00:22:02.561 02:38:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:02.562 02:38:43 -- scripts/common.sh@343 -- # case "$op" in 00:22:02.562 02:38:43 -- scripts/common.sh@344 -- # : 1 00:22:02.562 02:38:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:02.562 02:38:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.562 02:38:43 -- scripts/common.sh@364 -- # decimal 1 00:22:02.562 02:38:43 -- scripts/common.sh@352 -- # local d=1 00:22:02.562 02:38:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.562 02:38:43 -- scripts/common.sh@354 -- # echo 1 00:22:02.562 02:38:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:02.562 02:38:43 -- scripts/common.sh@365 -- # decimal 2 00:22:02.562 02:38:43 -- scripts/common.sh@352 -- # local d=2 00:22:02.562 02:38:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.562 02:38:43 -- scripts/common.sh@354 -- # echo 2 00:22:02.562 02:38:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:02.562 02:38:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:02.562 02:38:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:02.562 02:38:43 -- scripts/common.sh@367 -- # return 0 00:22:02.562 02:38:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.562 02:38:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:02.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.562 --rc genhtml_branch_coverage=1 00:22:02.562 --rc genhtml_function_coverage=1 00:22:02.562 --rc genhtml_legend=1 00:22:02.562 --rc geninfo_all_blocks=1 00:22:02.562 --rc geninfo_unexecuted_blocks=1 00:22:02.562 00:22:02.562 ' 00:22:02.562 02:38:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:02.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.562 --rc genhtml_branch_coverage=1 00:22:02.562 --rc genhtml_function_coverage=1 00:22:02.562 --rc genhtml_legend=1 00:22:02.562 --rc geninfo_all_blocks=1 00:22:02.562 --rc geninfo_unexecuted_blocks=1 00:22:02.562 00:22:02.562 ' 00:22:02.562 02:38:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:02.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.562 --rc genhtml_branch_coverage=1 00:22:02.562 --rc genhtml_function_coverage=1 00:22:02.562 --rc genhtml_legend=1 00:22:02.562 --rc geninfo_all_blocks=1 00:22:02.562 --rc geninfo_unexecuted_blocks=1 00:22:02.562 00:22:02.562 ' 00:22:02.562 02:38:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:02.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.562 --rc genhtml_branch_coverage=1 00:22:02.562 --rc genhtml_function_coverage=1 00:22:02.562 --rc genhtml_legend=1 00:22:02.562 --rc geninfo_all_blocks=1 00:22:02.562 --rc geninfo_unexecuted_blocks=1 00:22:02.562 00:22:02.562 ' 00:22:02.562 02:38:43 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:02.562 02:38:43 -- nvmf/common.sh@7 -- # uname -s 00:22:02.562 02:38:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.562 02:38:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.562 02:38:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.562 02:38:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.562 02:38:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.562 02:38:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.562 02:38:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.562 02:38:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.562 02:38:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.562 02:38:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.562 02:38:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:22:02.562 
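The scripts/common.sh trace above is the harness deciding which coverage flags to export: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, so lcov 1.15 sorts below 2 and the lcov_branch_coverage-style options are chosen. A minimal sketch of that field-wise comparison, where ver_lt is a hypothetical stand-in for the real cmp_versions helper:

  ver_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                   # equal, so not less-than
  }
  ver_lt 1.15 2 && echo 'lcov predates 2.x'      # mirrors the lt 1.15 2 call in the log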
02:38:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:22:02.562 02:38:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.562 02:38:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.562 02:38:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:02.562 02:38:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:02.562 02:38:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.562 02:38:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.562 02:38:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.562 02:38:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.562 02:38:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.562 02:38:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.562 02:38:43 -- paths/export.sh@5 -- # export PATH 00:22:02.562 02:38:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.562 02:38:43 -- nvmf/common.sh@46 -- # : 0 00:22:02.562 02:38:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:02.562 02:38:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:02.562 02:38:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:02.562 02:38:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.562 02:38:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.562 02:38:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
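The freshly generated NVME_HOSTNQN and NVME_HOSTID are what nvmf/common.sh packs into NVME_HOST and NVME_CONNECT for tests that connect from the kernel initiator. This particular failover run drives I/O through bdevperf instead, but for illustration, this is the kind of command those variables expand into (flags as in nvme-cli, addresses and NQN taken from this log):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b \
      --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b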
00:22:02.562 02:38:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:02.562 02:38:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:02.562 02:38:43 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:02.562 02:38:43 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:02.562 02:38:43 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:02.562 02:38:43 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.562 02:38:43 -- host/failover.sh@18 -- # nvmftestinit 00:22:02.562 02:38:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:02.562 02:38:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.562 02:38:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:02.562 02:38:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:02.562 02:38:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:02.562 02:38:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.562 02:38:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.562 02:38:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.562 02:38:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:02.562 02:38:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:02.562 02:38:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:02.562 02:38:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:02.562 02:38:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:02.562 02:38:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:02.562 02:38:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.562 02:38:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.562 02:38:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:02.562 02:38:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:02.562 02:38:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:02.563 02:38:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:02.563 02:38:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:02.563 02:38:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.563 02:38:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:02.563 02:38:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:02.563 02:38:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:02.563 02:38:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:02.563 02:38:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:02.563 02:38:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:02.563 Cannot find device "nvmf_tgt_br" 00:22:02.563 02:38:43 -- nvmf/common.sh@154 -- # true 00:22:02.563 02:38:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:02.563 Cannot find device "nvmf_tgt_br2" 00:22:02.563 02:38:43 -- nvmf/common.sh@155 -- # true 00:22:02.563 02:38:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:02.563 02:38:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:02.563 Cannot find device "nvmf_tgt_br" 00:22:02.563 02:38:43 -- nvmf/common.sh@157 -- # true 00:22:02.563 02:38:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:02.819 Cannot find device "nvmf_tgt_br2" 00:22:02.819 02:38:43 -- nvmf/common.sh@158 -- # true 00:22:02.819 02:38:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:02.819 02:38:43 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:22:02.819 02:38:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:02.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.819 02:38:43 -- nvmf/common.sh@161 -- # true 00:22:02.819 02:38:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:02.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:02.819 02:38:43 -- nvmf/common.sh@162 -- # true 00:22:02.819 02:38:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:02.819 02:38:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:02.819 02:38:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:02.819 02:38:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:02.820 02:38:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:02.820 02:38:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:02.820 02:38:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:02.820 02:38:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:02.820 02:38:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:02.820 02:38:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:02.820 02:38:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:02.820 02:38:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:02.820 02:38:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:02.820 02:38:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:02.820 02:38:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:02.820 02:38:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:02.820 02:38:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:02.820 02:38:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:02.820 02:38:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:02.820 02:38:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:02.820 02:38:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:02.820 02:38:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:02.820 02:38:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:02.820 02:38:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:02.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:22:02.820 00:22:02.820 --- 10.0.0.2 ping statistics --- 00:22:02.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.820 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:02.820 02:38:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:02.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:02.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:22:02.820 00:22:02.820 --- 10.0.0.3 ping statistics --- 00:22:02.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.820 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:02.820 02:38:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:02.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:02.820 00:22:02.820 --- 10.0.0.1 ping statistics --- 00:22:02.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.820 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:02.820 02:38:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.820 02:38:43 -- nvmf/common.sh@421 -- # return 0 00:22:02.820 02:38:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:02.820 02:38:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.820 02:38:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:02.820 02:38:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:02.820 02:38:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.820 02:38:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:02.820 02:38:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:02.820 02:38:43 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:02.820 02:38:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:02.820 02:38:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:02.820 02:38:43 -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 02:38:43 -- nvmf/common.sh@469 -- # nvmfpid=84858 00:22:02.820 02:38:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:02.820 02:38:43 -- nvmf/common.sh@470 -- # waitforlisten 84858 00:22:02.820 02:38:43 -- common/autotest_common.sh@829 -- # '[' -z 84858 ']' 00:22:02.820 02:38:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.820 02:38:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.820 02:38:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.820 02:38:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.820 02:38:43 -- common/autotest_common.sh@10 -- # set +x 00:22:03.077 [2024-11-21 02:38:43.506676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:03.078 [2024-11-21 02:38:43.506734] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.078 [2024-11-21 02:38:43.643345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:03.336 [2024-11-21 02:38:43.746805] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:03.336 [2024-11-21 02:38:43.746979] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.336 [2024-11-21 02:38:43.746997] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:03.336 [2024-11-21 02:38:43.747010] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.336 [2024-11-21 02:38:43.747339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.336 [2024-11-21 02:38:43.747673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.336 [2024-11-21 02:38:43.747687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.273 02:38:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:04.273 02:38:44 -- common/autotest_common.sh@862 -- # return 0 00:22:04.273 02:38:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:04.273 02:38:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.273 02:38:44 -- common/autotest_common.sh@10 -- # set +x 00:22:04.273 02:38:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.273 02:38:44 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:04.273 [2024-11-21 02:38:44.844701] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.273 02:38:44 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:04.532 Malloc0 00:22:04.532 02:38:45 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.791 02:38:45 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:05.050 02:38:45 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.309 [2024-11-21 02:38:45.880511] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.309 02:38:45 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:05.568 [2024-11-21 02:38:46.092731] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:05.568 02:38:46 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:05.827 [2024-11-21 02:38:46.357192] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:05.827 02:38:46 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:05.827 02:38:46 -- host/failover.sh@31 -- # bdevperf_pid=84968 00:22:05.827 02:38:46 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.827 02:38:46 -- host/failover.sh@34 -- # waitforlisten 84968 /var/tmp/bdevperf.sock 00:22:05.827 02:38:46 -- common/autotest_common.sh@829 -- # '[' -z 84968 ']' 00:22:05.827 02:38:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.827 02:38:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.827 02:38:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:05.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.827 02:38:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.827 02:38:46 -- common/autotest_common.sh@10 -- # set +x 00:22:06.763 02:38:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.763 02:38:47 -- common/autotest_common.sh@862 -- # return 0 00:22:06.763 02:38:47 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.331 NVMe0n1 00:22:07.331 02:38:47 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.331 00:22:07.590 02:38:47 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:07.590 02:38:47 -- host/failover.sh@39 -- # run_test_pid=85017 00:22:07.590 02:38:47 -- host/failover.sh@41 -- # sleep 1 00:22:08.526 02:38:48 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.787 [2024-11-21 02:38:49.227391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227472] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227494] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227504] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227512] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227527] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227550] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.787 [2024-11-21 02:38:49.227558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17f75b0 is same with the state(5) to be set [... the same tcp.c:1576:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x17f75b0 repeats while the 10.0.0.2:4420 listener is removed ...] 00:22:08.788 [2024-11-21 02:38:49.228102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228150] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228189] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 [2024-11-21 02:38:49.228197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f75b0 is same with the state(5) to be set 00:22:08.788 02:38:49 -- host/failover.sh@45 -- # sleep 3 00:22:12.079 02:38:52 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:12.079 00:22:12.079 02:38:52 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:12.338 [2024-11-21 02:38:52.832887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.832939] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.832951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.832960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.832968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.832976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.832986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.832994] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.833003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set 00:22:12.338 [2024-11-21 02:38:52.833011] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8420 is same with the state(5) to be set [... the same recv-state *ERROR* line for tqpair=0x17f8420 repeats while the 10.0.0.2:4421 listener is removed ...] 00:22:12.339 02:38:52 -- host/failover.sh@50 -- # sleep 3 00:22:15.626 02:38:55 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.626 [2024-11-21 02:38:56.117571] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.626 02:38:56 -- host/failover.sh@55 -- # sleep 1 00:22:16.562 02:38:57 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:16.821 [2024-11-21 02:38:57.398869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set [... the same recv-state *ERROR* line for tqpair=0x17f8fb0 repeats while the 10.0.0.2:4422 listener is removed ...] 00:22:16.821 [2024-11-21 02:38:57.398966] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with
the state(5) to be set 00:22:16.821 [2024-11-21 02:38:57.398973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.821 [2024-11-21 02:38:57.398981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.821 [2024-11-21 02:38:57.398989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.821 [2024-11-21 02:38:57.398997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.821 [2024-11-21 02:38:57.399005] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.821 [2024-11-21 02:38:57.399013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.821 [2024-11-21 02:38:57.399022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.821 [2024-11-21 02:38:57.399029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 [2024-11-21 02:38:57.399037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 [2024-11-21 02:38:57.399045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 [2024-11-21 02:38:57.399053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 [2024-11-21 02:38:57.399061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 [2024-11-21 02:38:57.399068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 [2024-11-21 02:38:57.399076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 [2024-11-21 02:38:57.399100] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 [2024-11-21 02:38:57.399107] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f8fb0 is same with the state(5) to be set 00:22:16.822 02:38:57 -- host/failover.sh@59 -- # wait 85017 00:22:23.415 0 00:22:23.415 02:39:03 -- host/failover.sh@61 -- # killprocess 84968 00:22:23.415 02:39:03 -- common/autotest_common.sh@936 -- # '[' -z 84968 ']' 00:22:23.415 02:39:03 -- common/autotest_common.sh@940 -- # kill -0 84968 00:22:23.415 02:39:03 -- common/autotest_common.sh@941 -- # uname 00:22:23.415 02:39:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:23.415 02:39:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84968 00:22:23.415 killing process with pid 84968 00:22:23.415 02:39:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:23.415 02:39:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:23.415 02:39:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84968' 00:22:23.415 02:39:03 -- common/autotest_common.sh@955 -- # 
kill 84968 00:22:23.415 02:39:03 -- common/autotest_common.sh@960 -- # wait 84968 00:22:23.415 02:39:03 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:23.415 [2024-11-21 02:38:46.417698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:23.415 [2024-11-21 02:38:46.417829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84968 ] 00:22:23.415 [2024-11-21 02:38:46.551813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.415 [2024-11-21 02:38:46.639444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.415 Running I/O for 15 seconds... 00:22:23.415 [2024-11-21 02:38:49.228602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.415 [2024-11-21 02:38:49.228687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.415 [2024-11-21 02:38:49.228716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.415 [2024-11-21 02:38:49.228769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.415 [2024-11-21 02:38:49.228799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.415 [2024-11-21 02:38:49.228829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.415 [2024-11-21 02:38:49.228855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.415 [2024-11-21 02:38:49.228881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.415 [2024-11-21 02:38:49.228907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.415 [2024-11-21 02:38:49.228919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.228933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.228946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.228959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.228971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 
[2024-11-21 02:38:49.229822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.229979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.229991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.230004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.230056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.230071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.230085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.230098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.416 [2024-11-21 02:38:49.230111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.416 [2024-11-21 02:38:49.230135] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:68 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.230961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.230987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.230999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12200 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.231060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.417 [2024-11-21 02:38:49.231084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.417 [2024-11-21 02:38:49.231316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.417 [2024-11-21 02:38:49.231328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 
[2024-11-21 02:38:49.231352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.231969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.231983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.231995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.232079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.232103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.232138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.232215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.418 [2024-11-21 02:38:49.232240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.418 [2024-11-21 02:38:49.232460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.418 [2024-11-21 02:38:49.232474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25419a0 is same with the state(5) to be set 00:22:23.419 [2024-11-21 02:38:49.232489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.419 [2024-11-21 02:38:49.232499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.419 [2024-11-21 02:38:49.232508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12056 len:8 PRP1 0x0 PRP2 0x0 00:22:23.419 [2024-11-21 02:38:49.232519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:49.232584] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25419a0 was disconnected and freed. reset controller. 00:22:23.419 [2024-11-21 02:38:49.232603] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:23.419 [2024-11-21 02:38:49.232655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.419 [2024-11-21 02:38:49.232675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:49.232689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.419 [2024-11-21 02:38:49.232708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:49.232721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.419 [2024-11-21 02:38:49.232733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:49.232785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.419 [2024-11-21 02:38:49.232799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:49.232812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:23.419 [2024-11-21 02:38:49.232868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc440 (9): Bad file descriptor 00:22:23.419 [2024-11-21 02:38:49.235165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:23.419 [2024-11-21 02:38:49.252978] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:23.419 [2024-11-21 02:38:52.833399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.419 [2024-11-21 02:38:52.833507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.419 [2024-11-21 02:38:52.833580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833688] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.833977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.833988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.834032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.834059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.834083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.834107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.419 [2024-11-21 02:38:52.834131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.419 [2024-11-21 02:38:52.834163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.419 [2024-11-21 02:38:52.834189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.419 [2024-11-21 02:38:52.834213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.419 [2024-11-21 02:38:52.834236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.419 [2024-11-21 02:38:52.834249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:97 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61968 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 
[2024-11-21 02:38:52.834788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.834942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.834978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.420 [2024-11-21 02:38:52.834989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.835002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.835013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.835025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.835037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.835051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.835063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.835075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.420 [2024-11-21 02:38:52.835096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.420 [2024-11-21 02:38:52.835109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 
[2024-11-21 02:38:52.835845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.835930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.835979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.835992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.836004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.836016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.836027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.836040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.836051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.836063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.836073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.836085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.421 [2024-11-21 02:38:52.836097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.421 [2024-11-21 02:38:52.836109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.421 [2024-11-21 02:38:52.836133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.422 [2024-11-21 02:38:52.836158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.422 [2024-11-21 02:38:52.836253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.422 [2024-11-21 02:38:52.836277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.422 [2024-11-21 02:38:52.836327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.422 [2024-11-21 02:38:52.836353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.422 [2024-11-21 02:38:52.836400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.422 [2024-11-21 02:38:52.836526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:52.836731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c4ae0 is same with the state(5) to be set 00:22:23.422 [2024-11-21 02:38:52.836782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.422 [2024-11-21 02:38:52.836792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.422 [2024-11-21 02:38:52.836800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61776 len:8 PRP1 0x0 PRP2 0x0 00:22:23.422 [2024-11-21 02:38:52.836812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836845] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24c4ae0 was disconnected and freed. reset controller. 
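The dump above is the expected signature of a path drop under active I/O: every command still outstanding on the deleted submission queue (qid:1) is completed back to the bdev layer as ABORTED - SQ DELETION (status code type 00, status code 08) before the qpair is freed and the controller reset is scheduled. A rough way to gauge how much I/O was in flight when the path dropped is to count those records in a saved copy of this console output; a minimal sketch, assuming the copy is named failover.log (hypothetical file name):

  # failover.log is a hypothetical name for a saved copy of this console output
  grep -c 'spdk_nvme_print_completion: .*ABORTED - SQ DELETION (00/08) qid:1' failover.log   # I/O-queue completions aborted by SQ deletion
  grep -c 'nvme_io_qpair_print_command: .*READ sqid:1' failover.log                          # reads outstanding when the queue was deleted
  grep -c 'nvme_io_qpair_print_command: .*WRITE sqid:1' failover.log                         # writes outstanding when the queue was deleted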
00:22:23.422 [2024-11-21 02:38:52.836870] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:23.422 [2024-11-21 02:38:52.836915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.422 [2024-11-21 02:38:52.836934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.422 [2024-11-21 02:38:52.836957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.422 [2024-11-21 02:38:52.836978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.836989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.422 [2024-11-21 02:38:52.837000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:52.837010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:23.422 [2024-11-21 02:38:52.839064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:23.422 [2024-11-21 02:38:52.839097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc440 (9): Bad file descriptor 00:22:23.422 [2024-11-21 02:38:52.860572] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
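This reset only succeeds because the controller was attached through more than one path: when the flush of tqpair 0x24cc440 fails with errno 9 (Bad file descriptor), bdev_nvme_failover_trid moves the controller to the next registered transport ID and the reconnect completes ("Resetting controller successful"). The alternate paths come from the same RPCs that appear verbatim later in this log; repeated here as a sketch of that setup, with the addresses, ports, NQN, and socket path exactly as this test uses them:

  # target side: expose the subsystem on the two extra ports (same commands as issued later in this log)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # initiator (bdevperf) side: attach the same controller through each path so a failover target exists
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1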
00:22:23.422 [2024-11-21 02:38:57.399223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:57.399259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:57.399280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:57.399293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:57.399307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:57.399319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:57.399332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:57.399344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:57.399356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:57.399383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:57.399398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:57.399410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.422 [2024-11-21 02:38:57.399422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.422 [2024-11-21 02:38:57.399434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 
02:38:57.399519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.399937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.399962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.399975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.399986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.400037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400058] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.400211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.400236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 
lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.400388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.423 [2024-11-21 02:38:57.400414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.423 [2024-11-21 02:38:57.400463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.423 [2024-11-21 02:38:57.400477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.424 [2024-11-21 02:38:57.400513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.424 [2024-11-21 02:38:57.400538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:23.424 [2024-11-21 02:38:57.400587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.424 [2024-11-21 02:38:57.400829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.424 [2024-11-21 02:38:57.400854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 
[2024-11-21 02:38:57.400877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.424 [2024-11-21 02:38:57.400902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.424 [2024-11-21 02:38:57.400952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.400976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.400989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401134] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.424 [2024-11-21 02:38:57.401364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.424 [2024-11-21 02:38:57.401384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.424 [2024-11-21 02:38:57.401397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401647] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.401975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.401988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.401999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.402047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.402085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.402156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.425 [2024-11-21 02:38:57.402242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:23.425 [2024-11-21 02:38:57.402280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.425 [2024-11-21 02:38:57.402520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.425 [2024-11-21 02:38:57.402533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.426 [2024-11-21 02:38:57.402545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 
02:38:57.402558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.426 [2024-11-21 02:38:57.402569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.426 [2024-11-21 02:38:57.402593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.426 [2024-11-21 02:38:57.402619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.426 [2024-11-21 02:38:57.402643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.426 [2024-11-21 02:38:57.402670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.426 [2024-11-21 02:38:57.402695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x253f1f0 is same with the state(5) to be set 00:22:23.426 [2024-11-21 02:38:57.402721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.426 [2024-11-21 02:38:57.402730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.426 [2024-11-21 02:38:57.402749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105600 len:8 PRP1 0x0 PRP2 0x0 00:22:23.426 [2024-11-21 02:38:57.402763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402796] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x253f1f0 was disconnected and freed. reset controller. 
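This second dump ends the same way as the first: the aborted I/O is drained, qpair 0x253f1f0 is freed, and the controller fails over once more, this time from 10.0.0.2:4422 back to the original 10.0.0.2:4420 (next record below). Together with the earlier hop that first moved the controller off 4420, that presumably accounts for the three successful resets the script checks for a few lines further down. The transitions can be listed in order from a saved copy of this output; a sketch, again assuming the hypothetical file name failover.log:

  # list each failover hop, then count the successful resets (the test expects 3)
  grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' failover.log
  grep -c 'Resetting controller successful' failover.log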
00:22:23.426 [2024-11-21 02:38:57.402811] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:23.426 [2024-11-21 02:38:57.402858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.426 [2024-11-21 02:38:57.402886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.426 [2024-11-21 02:38:57.402911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.426 [2024-11-21 02:38:57.402933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.426 [2024-11-21 02:38:57.402955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.426 [2024-11-21 02:38:57.402965] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:23.426 [2024-11-21 02:38:57.405049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:23.426 [2024-11-21 02:38:57.405081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc440 (9): Bad file descriptor 00:22:23.426 [2024-11-21 02:38:57.429289] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:23.426 00:22:23.426 Latency(us) 00:22:23.426 [2024-11-21T02:39:04.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.426 [2024-11-21T02:39:04.073Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:23.426 Verification LBA range: start 0x0 length 0x4000 00:22:23.426 NVMe0n1 : 15.01 15199.09 59.37 265.27 0.00 8262.45 558.55 15371.17 00:22:23.426 [2024-11-21T02:39:04.073Z] =================================================================================================================== 00:22:23.426 [2024-11-21T02:39:04.073Z] Total : 15199.09 59.37 265.27 0.00 8262.45 558.55 15371.17 00:22:23.426 Received shutdown signal, test time was about 15.000000 seconds 00:22:23.426 00:22:23.426 Latency(us) 00:22:23.426 [2024-11-21T02:39:04.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.426 [2024-11-21T02:39:04.073Z] =================================================================================================================== 00:22:23.426 [2024-11-21T02:39:04.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.426 02:39:03 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:23.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:23.426 02:39:03 -- host/failover.sh@65 -- # count=3 00:22:23.426 02:39:03 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:23.426 02:39:03 -- host/failover.sh@73 -- # bdevperf_pid=85220 00:22:23.426 02:39:03 -- host/failover.sh@75 -- # waitforlisten 85220 /var/tmp/bdevperf.sock 00:22:23.426 02:39:03 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:23.426 02:39:03 -- common/autotest_common.sh@829 -- # '[' -z 85220 ']' 00:22:23.426 02:39:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.426 02:39:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.426 02:39:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.426 02:39:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.426 02:39:03 -- common/autotest_common.sh@10 -- # set +x 00:22:23.994 02:39:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.994 02:39:04 -- common/autotest_common.sh@862 -- # return 0 00:22:23.994 02:39:04 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:24.253 [2024-11-21 02:39:04.657150] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:24.253 02:39:04 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:24.512 [2024-11-21 02:39:04.921495] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:24.512 02:39:04 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:24.771 NVMe0n1 00:22:24.771 02:39:05 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.029 00:22:25.029 02:39:05 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.288 00:22:25.288 02:39:05 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:25.288 02:39:05 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:25.547 02:39:06 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.805 02:39:06 -- host/failover.sh@87 -- # sleep 3 00:22:29.091 02:39:09 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:29.091 02:39:09 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:29.091 02:39:09 -- host/failover.sh@90 -- # run_test_pid=85357 00:22:29.091 02:39:09 -- host/failover.sh@92 -- # wait 85357 00:22:29.091 02:39:09 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.028 0 00:22:30.028 02:39:10 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:30.028 [2024-11-21 02:39:03.519000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:30.028 [2024-11-21 02:39:03.519104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85220 ] 00:22:30.028 [2024-11-21 02:39:03.652191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.028 [2024-11-21 02:39:03.727795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.028 [2024-11-21 02:39:06.267399] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:30.028 [2024-11-21 02:39:06.267503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.028 [2024-11-21 02:39:06.267525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.028 [2024-11-21 02:39:06.267539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.028 [2024-11-21 02:39:06.267552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.028 [2024-11-21 02:39:06.267563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.028 [2024-11-21 02:39:06.267575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.028 [2024-11-21 02:39:06.267586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.028 [2024-11-21 02:39:06.267597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.028 [2024-11-21 02:39:06.267608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.028 [2024-11-21 02:39:06.267645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.028 [2024-11-21 02:39:06.267671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc9440 (9): Bad file descriptor 00:22:30.028 [2024-11-21 02:39:06.271239] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:30.028 Running I/O for 1 seconds... 
00:22:30.028 00:22:30.028 Latency(us) 00:22:30.028 [2024-11-21T02:39:10.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.028 [2024-11-21T02:39:10.675Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:30.028 Verification LBA range: start 0x0 length 0x4000 00:22:30.028 NVMe0n1 : 1.01 15401.58 60.16 0.00 0.00 8276.66 1400.09 9294.20 00:22:30.028 [2024-11-21T02:39:10.675Z] =================================================================================================================== 00:22:30.028 [2024-11-21T02:39:10.675Z] Total : 15401.58 60.16 0.00 0.00 8276.66 1400.09 9294.20 00:22:30.286 02:39:10 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.286 02:39:10 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:30.545 02:39:10 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:30.545 02:39:11 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.545 02:39:11 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:30.804 02:39:11 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:31.079 02:39:11 -- host/failover.sh@101 -- # sleep 3 00:22:34.379 02:39:14 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:34.379 02:39:14 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:34.379 02:39:14 -- host/failover.sh@108 -- # killprocess 85220 00:22:34.379 02:39:14 -- common/autotest_common.sh@936 -- # '[' -z 85220 ']' 00:22:34.379 02:39:14 -- common/autotest_common.sh@940 -- # kill -0 85220 00:22:34.379 02:39:14 -- common/autotest_common.sh@941 -- # uname 00:22:34.379 02:39:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:34.379 02:39:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85220 00:22:34.379 02:39:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:34.379 02:39:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:34.379 killing process with pid 85220 00:22:34.379 02:39:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85220' 00:22:34.379 02:39:14 -- common/autotest_common.sh@955 -- # kill 85220 00:22:34.379 02:39:14 -- common/autotest_common.sh@960 -- # wait 85220 00:22:34.638 02:39:15 -- host/failover.sh@110 -- # sync 00:22:34.638 02:39:15 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:34.897 02:39:15 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:34.897 02:39:15 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:34.897 02:39:15 -- host/failover.sh@116 -- # nvmftestfini 00:22:34.897 02:39:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:34.897 02:39:15 -- nvmf/common.sh@116 -- # sync 00:22:34.897 02:39:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:34.897 02:39:15 -- nvmf/common.sh@119 -- # set +e 00:22:34.898 02:39:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:34.898 02:39:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:34.898 rmmod nvme_tcp 
00:22:34.898 rmmod nvme_fabrics 00:22:35.157 rmmod nvme_keyring 00:22:35.157 02:39:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:35.157 02:39:15 -- nvmf/common.sh@123 -- # set -e 00:22:35.157 02:39:15 -- nvmf/common.sh@124 -- # return 0 00:22:35.157 02:39:15 -- nvmf/common.sh@477 -- # '[' -n 84858 ']' 00:22:35.157 02:39:15 -- nvmf/common.sh@478 -- # killprocess 84858 00:22:35.157 02:39:15 -- common/autotest_common.sh@936 -- # '[' -z 84858 ']' 00:22:35.157 02:39:15 -- common/autotest_common.sh@940 -- # kill -0 84858 00:22:35.157 02:39:15 -- common/autotest_common.sh@941 -- # uname 00:22:35.157 02:39:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:35.157 02:39:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84858 00:22:35.157 02:39:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:35.157 02:39:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:35.157 02:39:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84858' 00:22:35.157 killing process with pid 84858 00:22:35.157 02:39:15 -- common/autotest_common.sh@955 -- # kill 84858 00:22:35.157 02:39:15 -- common/autotest_common.sh@960 -- # wait 84858 00:22:35.416 02:39:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:35.416 02:39:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:35.416 02:39:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:35.416 02:39:15 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.416 02:39:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:35.416 02:39:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.416 02:39:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.416 02:39:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.416 02:39:15 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:35.416 00:22:35.416 real 0m32.943s 00:22:35.416 user 2m6.932s 00:22:35.416 sys 0m5.265s 00:22:35.416 02:39:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:35.416 02:39:15 -- common/autotest_common.sh@10 -- # set +x 00:22:35.416 ************************************ 00:22:35.416 END TEST nvmf_failover 00:22:35.416 ************************************ 00:22:35.416 02:39:15 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:35.416 02:39:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:35.416 02:39:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:35.416 02:39:15 -- common/autotest_common.sh@10 -- # set +x 00:22:35.416 ************************************ 00:22:35.416 START TEST nvmf_discovery 00:22:35.416 ************************************ 00:22:35.416 02:39:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:35.416 * Looking for test storage... 
00:22:35.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:35.416 02:39:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:35.416 02:39:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:35.416 02:39:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:35.675 02:39:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:35.675 02:39:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:35.675 02:39:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:35.675 02:39:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:35.675 02:39:16 -- scripts/common.sh@335 -- # IFS=.-: 00:22:35.675 02:39:16 -- scripts/common.sh@335 -- # read -ra ver1 00:22:35.675 02:39:16 -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.675 02:39:16 -- scripts/common.sh@336 -- # read -ra ver2 00:22:35.675 02:39:16 -- scripts/common.sh@337 -- # local 'op=<' 00:22:35.675 02:39:16 -- scripts/common.sh@339 -- # ver1_l=2 00:22:35.675 02:39:16 -- scripts/common.sh@340 -- # ver2_l=1 00:22:35.675 02:39:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:35.675 02:39:16 -- scripts/common.sh@343 -- # case "$op" in 00:22:35.675 02:39:16 -- scripts/common.sh@344 -- # : 1 00:22:35.675 02:39:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:35.675 02:39:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.675 02:39:16 -- scripts/common.sh@364 -- # decimal 1 00:22:35.675 02:39:16 -- scripts/common.sh@352 -- # local d=1 00:22:35.675 02:39:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.675 02:39:16 -- scripts/common.sh@354 -- # echo 1 00:22:35.675 02:39:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:35.675 02:39:16 -- scripts/common.sh@365 -- # decimal 2 00:22:35.675 02:39:16 -- scripts/common.sh@352 -- # local d=2 00:22:35.675 02:39:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.675 02:39:16 -- scripts/common.sh@354 -- # echo 2 00:22:35.675 02:39:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:35.675 02:39:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:35.675 02:39:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:35.675 02:39:16 -- scripts/common.sh@367 -- # return 0 00:22:35.675 02:39:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.675 02:39:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.675 --rc genhtml_branch_coverage=1 00:22:35.675 --rc genhtml_function_coverage=1 00:22:35.675 --rc genhtml_legend=1 00:22:35.675 --rc geninfo_all_blocks=1 00:22:35.675 --rc geninfo_unexecuted_blocks=1 00:22:35.675 00:22:35.675 ' 00:22:35.675 02:39:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.675 --rc genhtml_branch_coverage=1 00:22:35.675 --rc genhtml_function_coverage=1 00:22:35.675 --rc genhtml_legend=1 00:22:35.675 --rc geninfo_all_blocks=1 00:22:35.675 --rc geninfo_unexecuted_blocks=1 00:22:35.675 00:22:35.675 ' 00:22:35.675 02:39:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.675 --rc genhtml_branch_coverage=1 00:22:35.675 --rc genhtml_function_coverage=1 00:22:35.675 --rc genhtml_legend=1 00:22:35.675 --rc geninfo_all_blocks=1 00:22:35.675 --rc geninfo_unexecuted_blocks=1 00:22:35.675 00:22:35.675 ' 00:22:35.675 
02:39:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:35.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.675 --rc genhtml_branch_coverage=1 00:22:35.675 --rc genhtml_function_coverage=1 00:22:35.675 --rc genhtml_legend=1 00:22:35.675 --rc geninfo_all_blocks=1 00:22:35.675 --rc geninfo_unexecuted_blocks=1 00:22:35.675 00:22:35.675 ' 00:22:35.675 02:39:16 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:35.675 02:39:16 -- nvmf/common.sh@7 -- # uname -s 00:22:35.675 02:39:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.675 02:39:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.675 02:39:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.675 02:39:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.675 02:39:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.675 02:39:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.675 02:39:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.675 02:39:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.675 02:39:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.675 02:39:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.675 02:39:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:22:35.675 02:39:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:22:35.675 02:39:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.675 02:39:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.675 02:39:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:35.675 02:39:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:35.675 02:39:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.675 02:39:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.675 02:39:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.675 02:39:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.675 02:39:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.675 02:39:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.675 02:39:16 -- paths/export.sh@5 -- # export PATH 00:22:35.675 02:39:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.675 02:39:16 -- nvmf/common.sh@46 -- # : 0 00:22:35.675 02:39:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:35.675 02:39:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:35.675 02:39:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:35.675 02:39:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.676 02:39:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.676 02:39:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:35.676 02:39:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:35.676 02:39:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:35.676 02:39:16 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:35.676 02:39:16 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:35.676 02:39:16 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:35.676 02:39:16 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:35.676 02:39:16 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:35.676 02:39:16 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:35.676 02:39:16 -- host/discovery.sh@25 -- # nvmftestinit 00:22:35.676 02:39:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:35.676 02:39:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.676 02:39:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:35.676 02:39:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:35.676 02:39:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:35.676 02:39:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.676 02:39:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.676 02:39:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.676 02:39:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:35.676 02:39:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:35.676 02:39:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:35.676 02:39:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:35.676 02:39:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:35.676 02:39:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:35.676 02:39:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.676 02:39:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.676 02:39:16 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:35.676 02:39:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:35.676 02:39:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:35.676 02:39:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:35.676 02:39:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:35.676 02:39:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.676 02:39:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:35.676 02:39:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:35.676 02:39:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:35.676 02:39:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:35.676 02:39:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:35.676 02:39:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:35.676 Cannot find device "nvmf_tgt_br" 00:22:35.676 02:39:16 -- nvmf/common.sh@154 -- # true 00:22:35.676 02:39:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.676 Cannot find device "nvmf_tgt_br2" 00:22:35.676 02:39:16 -- nvmf/common.sh@155 -- # true 00:22:35.676 02:39:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:35.676 02:39:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:35.676 Cannot find device "nvmf_tgt_br" 00:22:35.676 02:39:16 -- nvmf/common.sh@157 -- # true 00:22:35.676 02:39:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:35.676 Cannot find device "nvmf_tgt_br2" 00:22:35.676 02:39:16 -- nvmf/common.sh@158 -- # true 00:22:35.676 02:39:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:35.676 02:39:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:35.676 02:39:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.676 02:39:16 -- nvmf/common.sh@161 -- # true 00:22:35.676 02:39:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.676 02:39:16 -- nvmf/common.sh@162 -- # true 00:22:35.676 02:39:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.676 02:39:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.676 02:39:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.676 02:39:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.935 02:39:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.935 02:39:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.935 02:39:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.935 02:39:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:35.935 02:39:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:35.935 02:39:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:35.935 02:39:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:35.935 02:39:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:35.935 02:39:16 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:35.935 02:39:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.935 02:39:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:35.935 02:39:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.935 02:39:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:35.935 02:39:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:35.935 02:39:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.935 02:39:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.935 02:39:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.935 02:39:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.935 02:39:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.935 02:39:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:35.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:22:35.935 00:22:35.935 --- 10.0.0.2 ping statistics --- 00:22:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.935 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:35.935 02:39:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:35.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:35.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:22:35.935 00:22:35.935 --- 10.0.0.3 ping statistics --- 00:22:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.935 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:35.935 02:39:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:22:35.935 00:22:35.935 --- 10.0.0.1 ping statistics --- 00:22:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.935 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:35.935 02:39:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.935 02:39:16 -- nvmf/common.sh@421 -- # return 0 00:22:35.935 02:39:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:35.935 02:39:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.935 02:39:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:35.935 02:39:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:35.935 02:39:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.935 02:39:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:35.935 02:39:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:35.936 02:39:16 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:35.936 02:39:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:35.936 02:39:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.936 02:39:16 -- common/autotest_common.sh@10 -- # set +x 00:22:35.936 02:39:16 -- nvmf/common.sh@469 -- # nvmfpid=85672 00:22:35.936 02:39:16 -- nvmf/common.sh@470 -- # waitforlisten 85672 00:22:35.936 02:39:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:35.936 02:39:16 -- common/autotest_common.sh@829 -- # '[' -z 85672 ']' 00:22:35.936 02:39:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.936 02:39:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.936 02:39:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.936 02:39:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.936 02:39:16 -- common/autotest_common.sh@10 -- # set +x 00:22:35.936 [2024-11-21 02:39:16.551278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:35.936 [2024-11-21 02:39:16.551366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.195 [2024-11-21 02:39:16.690961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.195 [2024-11-21 02:39:16.770628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:36.195 [2024-11-21 02:39:16.770768] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.195 [2024-11-21 02:39:16.770793] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.195 [2024-11-21 02:39:16.770802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:36.195 [2024-11-21 02:39:16.770837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.132 02:39:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.132 02:39:17 -- common/autotest_common.sh@862 -- # return 0 00:22:37.132 02:39:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:37.132 02:39:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.132 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:22:37.132 02:39:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.132 02:39:17 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:37.132 02:39:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.132 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:22:37.132 [2024-11-21 02:39:17.612449] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.132 02:39:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.132 02:39:17 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:37.132 02:39:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.132 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:22:37.132 [2024-11-21 02:39:17.620565] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:37.132 02:39:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.132 02:39:17 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:37.132 02:39:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.132 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:22:37.132 null0 00:22:37.132 02:39:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.133 02:39:17 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:37.133 02:39:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.133 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:22:37.133 null1 00:22:37.133 02:39:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.133 02:39:17 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:37.133 02:39:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.133 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:22:37.133 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:37.133 02:39:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.133 02:39:17 -- host/discovery.sh@45 -- # hostpid=85722 00:22:37.133 02:39:17 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:37.133 02:39:17 -- host/discovery.sh@46 -- # waitforlisten 85722 /tmp/host.sock 00:22:37.133 02:39:17 -- common/autotest_common.sh@829 -- # '[' -z 85722 ']' 00:22:37.133 02:39:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:37.133 02:39:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.133 02:39:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:37.133 02:39:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.133 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:22:37.133 [2024-11-21 02:39:17.712077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:37.133 [2024-11-21 02:39:17.712350] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85722 ] 00:22:37.391 [2024-11-21 02:39:17.849853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.391 [2024-11-21 02:39:17.966703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:37.391 [2024-11-21 02:39:17.967257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.328 02:39:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.328 02:39:18 -- common/autotest_common.sh@862 -- # return 0 00:22:38.328 02:39:18 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.328 02:39:18 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:38.328 02:39:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.328 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.328 02:39:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.328 02:39:18 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:38.328 02:39:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.328 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.328 02:39:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.328 02:39:18 -- host/discovery.sh@72 -- # notify_id=0 00:22:38.328 02:39:18 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:38.328 02:39:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.328 02:39:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.328 02:39:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.328 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.328 02:39:18 -- host/discovery.sh@59 -- # sort 00:22:38.328 02:39:18 -- host/discovery.sh@59 -- # xargs 00:22:38.328 02:39:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.328 02:39:18 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:38.328 02:39:18 -- host/discovery.sh@79 -- # get_bdev_list 00:22:38.328 02:39:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.328 02:39:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.328 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.328 02:39:18 -- host/discovery.sh@55 -- # sort 00:22:38.328 02:39:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.328 02:39:18 -- host/discovery.sh@55 -- # xargs 00:22:38.328 02:39:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.328 02:39:18 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:38.328 02:39:18 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:38.328 02:39:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.328 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.328 02:39:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.328 02:39:18 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:38.328 02:39:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.328 02:39:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.328 02:39:18 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.328 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.328 02:39:18 -- host/discovery.sh@59 -- # sort 00:22:38.328 02:39:18 -- host/discovery.sh@59 -- # xargs 00:22:38.328 02:39:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.328 02:39:18 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:38.329 02:39:18 -- host/discovery.sh@83 -- # get_bdev_list 00:22:38.329 02:39:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.329 02:39:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.329 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.329 02:39:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.329 02:39:18 -- host/discovery.sh@55 -- # sort 00:22:38.329 02:39:18 -- host/discovery.sh@55 -- # xargs 00:22:38.329 02:39:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.588 02:39:18 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:38.588 02:39:18 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:38.588 02:39:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.588 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.588 02:39:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.588 02:39:18 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:38.588 02:39:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.588 02:39:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.588 02:39:18 -- host/discovery.sh@59 -- # sort 00:22:38.588 02:39:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.588 02:39:18 -- host/discovery.sh@59 -- # xargs 00:22:38.588 02:39:18 -- common/autotest_common.sh@10 -- # set +x 00:22:38.588 02:39:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.588 02:39:19 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:38.588 02:39:19 -- host/discovery.sh@87 -- # get_bdev_list 00:22:38.588 02:39:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.588 02:39:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.588 02:39:19 -- host/discovery.sh@55 -- # sort 00:22:38.588 02:39:19 -- host/discovery.sh@55 -- # xargs 00:22:38.588 02:39:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.588 02:39:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.588 02:39:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.588 02:39:19 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:38.588 02:39:19 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:38.588 02:39:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.588 02:39:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.588 [2024-11-21 02:39:19.104923] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.588 02:39:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.588 02:39:19 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:38.588 02:39:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.588 02:39:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.588 02:39:19 -- host/discovery.sh@59 -- # sort 00:22:38.588 02:39:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.588 02:39:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.588 02:39:19 -- host/discovery.sh@59 -- # xargs 
00:22:38.588 02:39:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.588 02:39:19 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:38.588 02:39:19 -- host/discovery.sh@93 -- # get_bdev_list 00:22:38.588 02:39:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.588 02:39:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.588 02:39:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.588 02:39:19 -- host/discovery.sh@55 -- # sort 00:22:38.588 02:39:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.588 02:39:19 -- host/discovery.sh@55 -- # xargs 00:22:38.588 02:39:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.588 02:39:19 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:38.588 02:39:19 -- host/discovery.sh@94 -- # get_notification_count 00:22:38.588 02:39:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:38.588 02:39:19 -- host/discovery.sh@74 -- # jq '. | length' 00:22:38.588 02:39:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.588 02:39:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.588 02:39:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.846 02:39:19 -- host/discovery.sh@74 -- # notification_count=0 00:22:38.846 02:39:19 -- host/discovery.sh@75 -- # notify_id=0 00:22:38.846 02:39:19 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:38.846 02:39:19 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:38.846 02:39:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.846 02:39:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.846 02:39:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.846 02:39:19 -- host/discovery.sh@100 -- # sleep 1 00:22:39.105 [2024-11-21 02:39:19.745257] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:39.105 [2024-11-21 02:39:19.745283] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:39.105 [2024-11-21 02:39:19.745300] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:39.364 [2024-11-21 02:39:19.831349] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:39.364 [2024-11-21 02:39:19.887093] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:39.364 [2024-11-21 02:39:19.887123] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:39.930 02:39:20 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:39.930 02:39:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:39.930 02:39:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.930 02:39:20 -- common/autotest_common.sh@10 -- # set +x 00:22:39.930 02:39:20 -- host/discovery.sh@59 -- # sort 00:22:39.930 02:39:20 -- host/discovery.sh@59 -- # xargs 00:22:39.930 02:39:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:39.930 02:39:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@102 -- # get_bdev_list 00:22:39.930 02:39:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:39.930 02:39:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.930 02:39:20 -- common/autotest_common.sh@10 -- # set +x 00:22:39.930 02:39:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:39.930 02:39:20 -- host/discovery.sh@55 -- # sort 00:22:39.930 02:39:20 -- host/discovery.sh@55 -- # xargs 00:22:39.930 02:39:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:39.930 02:39:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:39.930 02:39:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:39.930 02:39:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.930 02:39:20 -- host/discovery.sh@63 -- # sort -n 00:22:39.930 02:39:20 -- common/autotest_common.sh@10 -- # set +x 00:22:39.930 02:39:20 -- host/discovery.sh@63 -- # xargs 00:22:39.930 02:39:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@104 -- # get_notification_count 00:22:39.930 02:39:20 -- host/discovery.sh@74 -- # jq '. | length' 00:22:39.930 02:39:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:39.930 02:39:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.930 02:39:20 -- common/autotest_common.sh@10 -- # set +x 00:22:39.930 02:39:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@74 -- # notification_count=1 00:22:39.930 02:39:20 -- host/discovery.sh@75 -- # notify_id=1 00:22:39.930 02:39:20 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:39.930 02:39:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.930 02:39:20 -- common/autotest_common.sh@10 -- # set +x 00:22:39.930 02:39:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.930 02:39:20 -- host/discovery.sh@109 -- # sleep 1 00:22:41.306 02:39:21 -- host/discovery.sh@110 -- # get_bdev_list 00:22:41.306 02:39:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.306 02:39:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:41.306 02:39:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.306 02:39:21 -- host/discovery.sh@55 -- # sort 00:22:41.306 02:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:41.306 02:39:21 -- host/discovery.sh@55 -- # xargs 00:22:41.306 02:39:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.306 02:39:21 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:41.306 02:39:21 -- host/discovery.sh@111 -- # get_notification_count 00:22:41.306 02:39:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:41.306 02:39:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.306 02:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:41.306 02:39:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:41.306 02:39:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.306 02:39:21 -- host/discovery.sh@74 -- # notification_count=1 00:22:41.306 02:39:21 -- host/discovery.sh@75 -- # notify_id=2 00:22:41.306 02:39:21 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:41.306 02:39:21 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:41.306 02:39:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.306 02:39:21 -- common/autotest_common.sh@10 -- # set +x 00:22:41.306 [2024-11-21 02:39:21.625899] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:41.306 [2024-11-21 02:39:21.626997] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:41.306 [2024-11-21 02:39:21.627030] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:41.306 02:39:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.306 02:39:21 -- host/discovery.sh@117 -- # sleep 1 00:22:41.306 [2024-11-21 02:39:21.713051] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:41.306 [2024-11-21 02:39:21.777259] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:41.306 [2024-11-21 02:39:21.777282] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:41.306 [2024-11-21 02:39:21.777288] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:42.243 02:39:22 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:42.243 02:39:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:42.243 02:39:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:42.243 02:39:22 -- host/discovery.sh@59 -- # sort 00:22:42.243 02:39:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.243 02:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:42.243 02:39:22 -- host/discovery.sh@59 -- # xargs 00:22:42.243 02:39:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@119 -- # get_bdev_list 00:22:42.243 02:39:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.243 02:39:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.243 02:39:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.243 02:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:42.243 02:39:22 -- host/discovery.sh@55 -- # xargs 00:22:42.243 02:39:22 -- host/discovery.sh@55 -- # sort 00:22:42.243 02:39:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:42.243 02:39:22 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:42.243 02:39:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.243 02:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:42.243 02:39:22 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:42.243 02:39:22 -- host/discovery.sh@63 
-- # xargs 00:22:42.243 02:39:22 -- host/discovery.sh@63 -- # sort -n 00:22:42.243 02:39:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@121 -- # get_notification_count 00:22:42.243 02:39:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:42.243 02:39:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.243 02:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:42.243 02:39:22 -- host/discovery.sh@74 -- # jq '. | length' 00:22:42.243 02:39:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@74 -- # notification_count=0 00:22:42.243 02:39:22 -- host/discovery.sh@75 -- # notify_id=2 00:22:42.243 02:39:22 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:42.243 02:39:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.243 02:39:22 -- common/autotest_common.sh@10 -- # set +x 00:22:42.243 [2024-11-21 02:39:22.858515] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:42.243 [2024-11-21 02:39:22.858543] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:42.243 02:39:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.243 02:39:22 -- host/discovery.sh@127 -- # sleep 1 00:22:42.244 [2024-11-21 02:39:22.864838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.244 [2024-11-21 02:39:22.864875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.244 [2024-11-21 02:39:22.864888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.244 [2024-11-21 02:39:22.864897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.244 [2024-11-21 02:39:22.864907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.244 [2024-11-21 02:39:22.864915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.244 [2024-11-21 02:39:22.864923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.244 [2024-11-21 02:39:22.864931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.244 [2024-11-21 02:39:22.864940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe419c0 is same with the state(5) to be set 00:22:42.244 [2024-11-21 02:39:22.874797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe419c0 (9): Bad file descriptor 00:22:42.244 [2024-11-21 02:39:22.884817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.244 [2024-11-21 02:39:22.884923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:42.244 [2024-11-21 02:39:22.884968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.244 [2024-11-21 02:39:22.884983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe419c0 with addr=10.0.0.2, port=4420 00:22:42.244 [2024-11-21 02:39:22.884993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe419c0 is same with the state(5) to be set 00:22:42.244 [2024-11-21 02:39:22.885008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe419c0 (9): Bad file descriptor 00:22:42.244 [2024-11-21 02:39:22.885021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.244 [2024-11-21 02:39:22.885045] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.244 [2024-11-21 02:39:22.885054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.244 [2024-11-21 02:39:22.885069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.503 [2024-11-21 02:39:22.894882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.503 [2024-11-21 02:39:22.894973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.895016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.895042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe419c0 with addr=10.0.0.2, port=4420 00:22:42.503 [2024-11-21 02:39:22.895051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe419c0 is same with the state(5) to be set 00:22:42.503 [2024-11-21 02:39:22.895065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe419c0 (9): Bad file descriptor 00:22:42.503 [2024-11-21 02:39:22.895078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.503 [2024-11-21 02:39:22.895102] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.503 [2024-11-21 02:39:22.895110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.503 [2024-11-21 02:39:22.895123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.503 [2024-11-21 02:39:22.904945] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.503 [2024-11-21 02:39:22.905040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.905084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.905099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe419c0 with addr=10.0.0.2, port=4420 00:22:42.503 [2024-11-21 02:39:22.905108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe419c0 is same with the state(5) to be set 00:22:42.503 [2024-11-21 02:39:22.905122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe419c0 (9): Bad file descriptor 00:22:42.503 [2024-11-21 02:39:22.905135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.503 [2024-11-21 02:39:22.905142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.503 [2024-11-21 02:39:22.905150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.503 [2024-11-21 02:39:22.905163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.503 [2024-11-21 02:39:22.914993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.503 [2024-11-21 02:39:22.915082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.915124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.915139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe419c0 with addr=10.0.0.2, port=4420 00:22:42.503 [2024-11-21 02:39:22.915147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe419c0 is same with the state(5) to be set 00:22:42.503 [2024-11-21 02:39:22.915161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe419c0 (9): Bad file descriptor 00:22:42.503 [2024-11-21 02:39:22.915173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.503 [2024-11-21 02:39:22.915180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.503 [2024-11-21 02:39:22.915188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.503 [2024-11-21 02:39:22.915200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.503 [2024-11-21 02:39:22.925053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.503 [2024-11-21 02:39:22.925122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.925162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.925176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe419c0 with addr=10.0.0.2, port=4420 00:22:42.503 [2024-11-21 02:39:22.925185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe419c0 is same with the state(5) to be set 00:22:42.503 [2024-11-21 02:39:22.925198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe419c0 (9): Bad file descriptor 00:22:42.503 [2024-11-21 02:39:22.925210] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.503 [2024-11-21 02:39:22.925227] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.503 [2024-11-21 02:39:22.925234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.503 [2024-11-21 02:39:22.925246] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.503 [2024-11-21 02:39:22.935096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.503 [2024-11-21 02:39:22.935166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.935206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.503 [2024-11-21 02:39:22.935220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe419c0 with addr=10.0.0.2, port=4420 00:22:42.503 [2024-11-21 02:39:22.935228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe419c0 is same with the state(5) to be set 00:22:42.503 [2024-11-21 02:39:22.935241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe419c0 (9): Bad file descriptor 00:22:42.503 [2024-11-21 02:39:22.935253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.503 [2024-11-21 02:39:22.935259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.503 [2024-11-21 02:39:22.935267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.503 [2024-11-21 02:39:22.935278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
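The repeated connect() failures above (errno = 111, ECONNREFUSED) are the expected fallout of the listener on 10.0.0.2:4420 having just been removed while the host still holds a controller path there: every reconnect attempt is refused until the discovery poller drops the stale 4420 path and keeps only 4421. A minimal sketch of the equivalent manual sequence, assuming SPDK's scripts/rpc.py is invoked directly in place of the rpc_cmd wrapper the test uses, with the NQN, addresses and host socket taken from this log:

    # Drop the 4420 listener on the target side; existing host connections to that
    # port start failing with ECONNREFUSED (errno 111), producing the resets above.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # On the host side, the discovery service should eventually report only the 4421 path.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0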
00:22:42.503 [2024-11-21 02:39:22.944578] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:42.503 [2024-11-21 02:39:22.944604] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:43.443 02:39:23 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:43.443 02:39:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:43.443 02:39:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:43.443 02:39:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.443 02:39:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.443 02:39:23 -- host/discovery.sh@59 -- # sort 00:22:43.443 02:39:23 -- host/discovery.sh@59 -- # xargs 00:22:43.443 02:39:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.443 02:39:23 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.443 02:39:23 -- host/discovery.sh@129 -- # get_bdev_list 00:22:43.443 02:39:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.443 02:39:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.443 02:39:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.443 02:39:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.443 02:39:23 -- host/discovery.sh@55 -- # sort 00:22:43.443 02:39:23 -- host/discovery.sh@55 -- # xargs 00:22:43.443 02:39:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.443 02:39:23 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.443 02:39:23 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:43.443 02:39:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:43.443 02:39:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.443 02:39:23 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:43.443 02:39:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.443 02:39:23 -- host/discovery.sh@63 -- # xargs 00:22:43.443 02:39:23 -- host/discovery.sh@63 -- # sort -n 00:22:43.443 02:39:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.443 02:39:24 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:43.443 02:39:24 -- host/discovery.sh@131 -- # get_notification_count 00:22:43.443 02:39:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:43.443 02:39:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:43.443 02:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.443 02:39:24 -- common/autotest_common.sh@10 -- # set +x 00:22:43.443 02:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.702 02:39:24 -- host/discovery.sh@74 -- # notification_count=0 00:22:43.702 02:39:24 -- host/discovery.sh@75 -- # notify_id=2 00:22:43.702 02:39:24 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:43.702 02:39:24 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:43.702 02:39:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.702 02:39:24 -- common/autotest_common.sh@10 -- # set +x 00:22:43.702 02:39:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.702 02:39:24 -- host/discovery.sh@135 -- # sleep 1 00:22:44.638 02:39:25 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:44.638 02:39:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:44.638 02:39:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:44.638 02:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.638 02:39:25 -- host/discovery.sh@59 -- # sort 00:22:44.638 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:22:44.638 02:39:25 -- host/discovery.sh@59 -- # xargs 00:22:44.638 02:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.638 02:39:25 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:44.638 02:39:25 -- host/discovery.sh@137 -- # get_bdev_list 00:22:44.638 02:39:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.638 02:39:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:44.638 02:39:25 -- host/discovery.sh@55 -- # sort 00:22:44.638 02:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.638 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:22:44.638 02:39:25 -- host/discovery.sh@55 -- # xargs 00:22:44.638 02:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.638 02:39:25 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:44.638 02:39:25 -- host/discovery.sh@138 -- # get_notification_count 00:22:44.638 02:39:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:44.638 02:39:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:44.638 02:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.638 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:22:44.638 02:39:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.638 02:39:25 -- host/discovery.sh@74 -- # notification_count=2 00:22:44.638 02:39:25 -- host/discovery.sh@75 -- # notify_id=4 00:22:44.639 02:39:25 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:44.639 02:39:25 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:44.639 02:39:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.639 02:39:25 -- common/autotest_common.sh@10 -- # set +x 00:22:46.017 [2024-11-21 02:39:26.285842] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:46.017 [2024-11-21 02:39:26.285859] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:46.017 [2024-11-21 02:39:26.285873] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:46.017 [2024-11-21 02:39:26.371945] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:46.017 [2024-11-21 02:39:26.430747] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:46.017 [2024-11-21 02:39:26.430790] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:46.017 02:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.017 02:39:26 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:46.017 02:39:26 -- common/autotest_common.sh@650 -- # local es=0 00:22:46.017 02:39:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:46.017 02:39:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:46.017 02:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.017 02:39:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:46.017 02:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.017 02:39:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:46.017 02:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.017 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:22:46.017 2024/11/21 02:39:26 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:46.017 request: 00:22:46.017 { 00:22:46.017 "method": "bdev_nvme_start_discovery", 00:22:46.017 "params": { 00:22:46.017 "name": "nvme", 00:22:46.017 "trtype": "tcp", 00:22:46.017 "traddr": "10.0.0.2", 00:22:46.017 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:46.017 
"adrfam": "ipv4", 00:22:46.017 "trsvcid": "8009", 00:22:46.017 "wait_for_attach": true 00:22:46.017 } 00:22:46.017 } 00:22:46.017 Got JSON-RPC error response 00:22:46.017 GoRPCClient: error on JSON-RPC call 00:22:46.017 02:39:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:46.017 02:39:26 -- common/autotest_common.sh@653 -- # es=1 00:22:46.017 02:39:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:46.017 02:39:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:46.017 02:39:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:46.017 02:39:26 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:46.017 02:39:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:46.017 02:39:26 -- host/discovery.sh@67 -- # sort 00:22:46.017 02:39:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:46.017 02:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.017 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:22:46.017 02:39:26 -- host/discovery.sh@67 -- # xargs 00:22:46.017 02:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.017 02:39:26 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:46.017 02:39:26 -- host/discovery.sh@147 -- # get_bdev_list 00:22:46.017 02:39:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.017 02:39:26 -- host/discovery.sh@55 -- # xargs 00:22:46.017 02:39:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:46.017 02:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.017 02:39:26 -- host/discovery.sh@55 -- # sort 00:22:46.017 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:22:46.017 02:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.017 02:39:26 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:46.017 02:39:26 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:46.017 02:39:26 -- common/autotest_common.sh@650 -- # local es=0 00:22:46.017 02:39:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:46.017 02:39:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:46.017 02:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.017 02:39:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:46.017 02:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.017 02:39:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:46.017 02:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.017 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:22:46.017 2024/11/21 02:39:26 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:46.017 request: 00:22:46.017 { 00:22:46.017 "method": "bdev_nvme_start_discovery", 00:22:46.017 "params": { 00:22:46.017 "name": "nvme_second", 00:22:46.017 "trtype": "tcp", 00:22:46.017 "traddr": "10.0.0.2", 
00:22:46.017 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:46.017 "adrfam": "ipv4", 00:22:46.017 "trsvcid": "8009", 00:22:46.017 "wait_for_attach": true 00:22:46.017 } 00:22:46.017 } 00:22:46.017 Got JSON-RPC error response 00:22:46.017 GoRPCClient: error on JSON-RPC call 00:22:46.017 02:39:26 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:46.017 02:39:26 -- common/autotest_common.sh@653 -- # es=1 00:22:46.017 02:39:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:46.017 02:39:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:46.017 02:39:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:46.017 02:39:26 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:46.017 02:39:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:46.017 02:39:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:46.017 02:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.017 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:22:46.017 02:39:26 -- host/discovery.sh@67 -- # sort 00:22:46.017 02:39:26 -- host/discovery.sh@67 -- # xargs 00:22:46.017 02:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.017 02:39:26 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:46.017 02:39:26 -- host/discovery.sh@153 -- # get_bdev_list 00:22:46.017 02:39:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:46.017 02:39:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.017 02:39:26 -- host/discovery.sh@55 -- # sort 00:22:46.017 02:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.017 02:39:26 -- host/discovery.sh@55 -- # xargs 00:22:46.017 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:22:46.276 02:39:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.276 02:39:26 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:46.276 02:39:26 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:46.276 02:39:26 -- common/autotest_common.sh@650 -- # local es=0 00:22:46.276 02:39:26 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:46.276 02:39:26 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:46.276 02:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.276 02:39:26 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:46.276 02:39:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:46.276 02:39:26 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:46.276 02:39:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.276 02:39:26 -- common/autotest_common.sh@10 -- # set +x 00:22:47.212 [2024-11-21 02:39:27.693055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.212 [2024-11-21 02:39:27.693122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.212 [2024-11-21 02:39:27.693139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3d970 with addr=10.0.0.2, port=8010 00:22:47.212 [2024-11-21 02:39:27.693151] 
nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:47.212 [2024-11-21 02:39:27.693160] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:47.212 [2024-11-21 02:39:27.693167] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:48.148 [2024-11-21 02:39:28.693038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.148 [2024-11-21 02:39:28.693114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.148 [2024-11-21 02:39:28.693131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3d970 with addr=10.0.0.2, port=8010 00:22:48.148 [2024-11-21 02:39:28.693143] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:48.148 [2024-11-21 02:39:28.693151] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:48.148 [2024-11-21 02:39:28.693158] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:49.084 [2024-11-21 02:39:29.692975] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:49.084 2024/11/21 02:39:29 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:49.084 request: 00:22:49.084 { 00:22:49.084 "method": "bdev_nvme_start_discovery", 00:22:49.084 "params": { 00:22:49.084 "name": "nvme_second", 00:22:49.084 "trtype": "tcp", 00:22:49.084 "traddr": "10.0.0.2", 00:22:49.084 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:49.084 "adrfam": "ipv4", 00:22:49.084 "trsvcid": "8010", 00:22:49.084 "attach_timeout_ms": 3000 00:22:49.084 } 00:22:49.084 } 00:22:49.084 Got JSON-RPC error response 00:22:49.084 GoRPCClient: error on JSON-RPC call 00:22:49.084 02:39:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:49.084 02:39:29 -- common/autotest_common.sh@653 -- # es=1 00:22:49.084 02:39:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.084 02:39:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.084 02:39:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.084 02:39:29 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:49.084 02:39:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:49.084 02:39:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:49.084 02:39:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.084 02:39:29 -- common/autotest_common.sh@10 -- # set +x 00:22:49.084 02:39:29 -- host/discovery.sh@67 -- # sort 00:22:49.084 02:39:29 -- host/discovery.sh@67 -- # xargs 00:22:49.084 02:39:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.343 02:39:29 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:49.343 02:39:29 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:49.343 02:39:29 -- host/discovery.sh@162 -- # kill 85722 00:22:49.343 02:39:29 -- host/discovery.sh@163 -- # nvmftestfini 00:22:49.343 02:39:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:49.343 02:39:29 -- nvmf/common.sh@116 -- # sync 00:22:49.343 02:39:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:49.343 02:39:29 -- nvmf/common.sh@119 -- # set +e 00:22:49.343 02:39:29 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:22:49.343 02:39:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:49.343 rmmod nvme_tcp 00:22:49.343 rmmod nvme_fabrics 00:22:49.343 rmmod nvme_keyring 00:22:49.343 02:39:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:49.344 02:39:29 -- nvmf/common.sh@123 -- # set -e 00:22:49.344 02:39:29 -- nvmf/common.sh@124 -- # return 0 00:22:49.344 02:39:29 -- nvmf/common.sh@477 -- # '[' -n 85672 ']' 00:22:49.344 02:39:29 -- nvmf/common.sh@478 -- # killprocess 85672 00:22:49.344 02:39:29 -- common/autotest_common.sh@936 -- # '[' -z 85672 ']' 00:22:49.344 02:39:29 -- common/autotest_common.sh@940 -- # kill -0 85672 00:22:49.344 02:39:29 -- common/autotest_common.sh@941 -- # uname 00:22:49.344 02:39:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:49.344 02:39:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85672 00:22:49.344 02:39:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:49.344 02:39:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:49.344 killing process with pid 85672 00:22:49.344 02:39:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85672' 00:22:49.344 02:39:29 -- common/autotest_common.sh@955 -- # kill 85672 00:22:49.344 02:39:29 -- common/autotest_common.sh@960 -- # wait 85672 00:22:49.603 02:39:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:49.603 02:39:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:49.603 02:39:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:49.603 02:39:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:49.603 02:39:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:49.603 02:39:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.603 02:39:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.603 02:39:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.603 02:39:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:49.603 ************************************ 00:22:49.603 END TEST nvmf_discovery 00:22:49.603 ************************************ 00:22:49.603 00:22:49.603 real 0m14.230s 00:22:49.603 user 0m27.920s 00:22:49.603 sys 0m1.710s 00:22:49.603 02:39:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:49.603 02:39:30 -- common/autotest_common.sh@10 -- # set +x 00:22:49.603 02:39:30 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:49.603 02:39:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:49.603 02:39:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:49.603 02:39:30 -- common/autotest_common.sh@10 -- # set +x 00:22:49.603 ************************************ 00:22:49.603 START TEST nvmf_discovery_remove_ifc 00:22:49.603 ************************************ 00:22:49.603 02:39:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:49.863 * Looking for test storage... 
00:22:49.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:49.863 02:39:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:49.863 02:39:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:49.863 02:39:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:49.863 02:39:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:49.863 02:39:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:49.863 02:39:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:49.863 02:39:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:49.863 02:39:30 -- scripts/common.sh@335 -- # IFS=.-: 00:22:49.863 02:39:30 -- scripts/common.sh@335 -- # read -ra ver1 00:22:49.863 02:39:30 -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.863 02:39:30 -- scripts/common.sh@336 -- # read -ra ver2 00:22:49.863 02:39:30 -- scripts/common.sh@337 -- # local 'op=<' 00:22:49.863 02:39:30 -- scripts/common.sh@339 -- # ver1_l=2 00:22:49.863 02:39:30 -- scripts/common.sh@340 -- # ver2_l=1 00:22:49.863 02:39:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:49.863 02:39:30 -- scripts/common.sh@343 -- # case "$op" in 00:22:49.863 02:39:30 -- scripts/common.sh@344 -- # : 1 00:22:49.863 02:39:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:49.863 02:39:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.863 02:39:30 -- scripts/common.sh@364 -- # decimal 1 00:22:49.863 02:39:30 -- scripts/common.sh@352 -- # local d=1 00:22:49.863 02:39:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.863 02:39:30 -- scripts/common.sh@354 -- # echo 1 00:22:49.863 02:39:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:49.863 02:39:30 -- scripts/common.sh@365 -- # decimal 2 00:22:49.863 02:39:30 -- scripts/common.sh@352 -- # local d=2 00:22:49.863 02:39:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.863 02:39:30 -- scripts/common.sh@354 -- # echo 2 00:22:49.863 02:39:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:49.863 02:39:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:49.863 02:39:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:49.863 02:39:30 -- scripts/common.sh@367 -- # return 0 00:22:49.863 02:39:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.863 02:39:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:49.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.863 --rc genhtml_branch_coverage=1 00:22:49.863 --rc genhtml_function_coverage=1 00:22:49.863 --rc genhtml_legend=1 00:22:49.863 --rc geninfo_all_blocks=1 00:22:49.863 --rc geninfo_unexecuted_blocks=1 00:22:49.863 00:22:49.863 ' 00:22:49.863 02:39:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:49.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.863 --rc genhtml_branch_coverage=1 00:22:49.863 --rc genhtml_function_coverage=1 00:22:49.863 --rc genhtml_legend=1 00:22:49.863 --rc geninfo_all_blocks=1 00:22:49.863 --rc geninfo_unexecuted_blocks=1 00:22:49.863 00:22:49.863 ' 00:22:49.863 02:39:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:49.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.863 --rc genhtml_branch_coverage=1 00:22:49.863 --rc genhtml_function_coverage=1 00:22:49.863 --rc genhtml_legend=1 00:22:49.863 --rc geninfo_all_blocks=1 00:22:49.863 --rc geninfo_unexecuted_blocks=1 00:22:49.863 00:22:49.863 ' 00:22:49.863 
02:39:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:49.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.863 --rc genhtml_branch_coverage=1 00:22:49.863 --rc genhtml_function_coverage=1 00:22:49.863 --rc genhtml_legend=1 00:22:49.863 --rc geninfo_all_blocks=1 00:22:49.863 --rc geninfo_unexecuted_blocks=1 00:22:49.863 00:22:49.863 ' 00:22:49.863 02:39:30 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:49.863 02:39:30 -- nvmf/common.sh@7 -- # uname -s 00:22:49.863 02:39:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.863 02:39:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.863 02:39:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.863 02:39:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.863 02:39:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.863 02:39:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.863 02:39:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.863 02:39:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.863 02:39:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.863 02:39:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.863 02:39:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:22:49.863 02:39:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:22:49.863 02:39:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.863 02:39:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.863 02:39:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:49.863 02:39:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:49.863 02:39:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.863 02:39:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.863 02:39:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.863 02:39:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.863 02:39:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.863 02:39:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.863 02:39:30 -- paths/export.sh@5 -- # export PATH 00:22:49.863 02:39:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.863 02:39:30 -- nvmf/common.sh@46 -- # : 0 00:22:49.863 02:39:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:49.863 02:39:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:49.863 02:39:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:49.863 02:39:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.863 02:39:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.863 02:39:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:49.863 02:39:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:49.863 02:39:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:49.863 02:39:30 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:49.863 02:39:30 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:49.863 02:39:30 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:49.863 02:39:30 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:49.863 02:39:30 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:49.863 02:39:30 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:49.863 02:39:30 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:49.863 02:39:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:49.863 02:39:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.864 02:39:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:49.864 02:39:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:49.864 02:39:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:49.864 02:39:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.864 02:39:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.864 02:39:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.864 02:39:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:49.864 02:39:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:49.864 02:39:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:49.864 02:39:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:49.864 02:39:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:49.864 02:39:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:49.864 02:39:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.864 02:39:30 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.864 02:39:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:49.864 02:39:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:49.864 02:39:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:49.864 02:39:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:49.864 02:39:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:49.864 02:39:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.864 02:39:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:49.864 02:39:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:49.864 02:39:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:49.864 02:39:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:49.864 02:39:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:49.864 02:39:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:49.864 Cannot find device "nvmf_tgt_br" 00:22:49.864 02:39:30 -- nvmf/common.sh@154 -- # true 00:22:49.864 02:39:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:49.864 Cannot find device "nvmf_tgt_br2" 00:22:49.864 02:39:30 -- nvmf/common.sh@155 -- # true 00:22:49.864 02:39:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:49.864 02:39:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:49.864 Cannot find device "nvmf_tgt_br" 00:22:49.864 02:39:30 -- nvmf/common.sh@157 -- # true 00:22:49.864 02:39:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:49.864 Cannot find device "nvmf_tgt_br2" 00:22:49.864 02:39:30 -- nvmf/common.sh@158 -- # true 00:22:49.864 02:39:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:50.122 02:39:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:50.122 02:39:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:50.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.122 02:39:30 -- nvmf/common.sh@161 -- # true 00:22:50.122 02:39:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:50.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.122 02:39:30 -- nvmf/common.sh@162 -- # true 00:22:50.122 02:39:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:50.122 02:39:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:50.122 02:39:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:50.122 02:39:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:50.122 02:39:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:50.122 02:39:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:50.122 02:39:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:50.122 02:39:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:50.122 02:39:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:50.122 02:39:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:50.122 02:39:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:50.122 02:39:30 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:50.122 02:39:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:50.122 02:39:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:50.122 02:39:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:50.122 02:39:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:50.122 02:39:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:50.122 02:39:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:50.122 02:39:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:50.122 02:39:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:50.123 02:39:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:50.123 02:39:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:50.123 02:39:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:50.382 02:39:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:50.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:22:50.382 00:22:50.382 --- 10.0.0.2 ping statistics --- 00:22:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.382 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:22:50.382 02:39:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:50.382 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:50.382 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:22:50.382 00:22:50.382 --- 10.0.0.3 ping statistics --- 00:22:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.382 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:50.382 02:39:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:50.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:50.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:22:50.382 00:22:50.382 --- 10.0.0.1 ping statistics --- 00:22:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.382 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:50.382 02:39:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.382 02:39:30 -- nvmf/common.sh@421 -- # return 0 00:22:50.382 02:39:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:50.382 02:39:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.382 02:39:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:50.382 02:39:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:50.382 02:39:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.382 02:39:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:50.382 02:39:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:50.382 02:39:30 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:50.382 02:39:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:50.382 02:39:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.382 02:39:30 -- common/autotest_common.sh@10 -- # set +x 00:22:50.382 02:39:30 -- nvmf/common.sh@469 -- # nvmfpid=86236 00:22:50.382 02:39:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:50.382 02:39:30 -- nvmf/common.sh@470 -- # waitforlisten 86236 00:22:50.382 02:39:30 -- common/autotest_common.sh@829 -- # '[' -z 86236 ']' 00:22:50.382 02:39:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.382 02:39:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.382 02:39:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.382 02:39:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.382 02:39:30 -- common/autotest_common.sh@10 -- # set +x 00:22:50.382 [2024-11-21 02:39:30.872533] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:50.382 [2024-11-21 02:39:30.872646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.382 [2024-11-21 02:39:31.009033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.641 [2024-11-21 02:39:31.086688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:50.641 [2024-11-21 02:39:31.086860] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.641 [2024-11-21 02:39:31.086873] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.641 [2024-11-21 02:39:31.086882] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:50.641 [2024-11-21 02:39:31.086908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.579 02:39:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.579 02:39:31 -- common/autotest_common.sh@862 -- # return 0 00:22:51.579 02:39:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:51.579 02:39:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.579 02:39:31 -- common/autotest_common.sh@10 -- # set +x 00:22:51.579 02:39:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.579 02:39:31 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:51.579 02:39:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.579 02:39:31 -- common/autotest_common.sh@10 -- # set +x 00:22:51.579 [2024-11-21 02:39:31.960308] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.579 [2024-11-21 02:39:31.968399] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:51.579 null0 00:22:51.579 [2024-11-21 02:39:32.000340] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.579 02:39:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.579 02:39:32 -- host/discovery_remove_ifc.sh@59 -- # hostpid=86286 00:22:51.579 02:39:32 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:51.579 02:39:32 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 86286 /tmp/host.sock 00:22:51.579 02:39:32 -- common/autotest_common.sh@829 -- # '[' -z 86286 ']' 00:22:51.579 02:39:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:51.579 02:39:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.579 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:51.579 02:39:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:51.579 02:39:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.579 02:39:32 -- common/autotest_common.sh@10 -- # set +x 00:22:51.579 [2024-11-21 02:39:32.083447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:51.579 [2024-11-21 02:39:32.083984] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86286 ] 00:22:51.579 [2024-11-21 02:39:32.223517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.838 [2024-11-21 02:39:32.315245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:51.838 [2024-11-21 02:39:32.315451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.406 02:39:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.406 02:39:33 -- common/autotest_common.sh@862 -- # return 0 00:22:52.406 02:39:33 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.406 02:39:33 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:52.406 02:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.406 02:39:33 -- common/autotest_common.sh@10 -- # set +x 00:22:52.406 02:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.406 02:39:33 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:52.406 02:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.406 02:39:33 -- common/autotest_common.sh@10 -- # set +x 00:22:52.666 02:39:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.666 02:39:33 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:52.666 02:39:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.666 02:39:33 -- common/autotest_common.sh@10 -- # set +x 00:22:53.602 [2024-11-21 02:39:34.143355] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:53.602 [2024-11-21 02:39:34.143400] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:53.602 [2024-11-21 02:39:34.143417] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:53.602 [2024-11-21 02:39:34.229534] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:53.861 [2024-11-21 02:39:34.285098] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:53.861 [2024-11-21 02:39:34.285162] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:53.861 [2024-11-21 02:39:34.285188] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:53.861 [2024-11-21 02:39:34.285213] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:53.861 [2024-11-21 02:39:34.285252] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:53.861 02:39:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.861 02:39:34 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:53.861 02:39:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.861 02:39:34 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.862 02:39:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.862 02:39:34 -- common/autotest_common.sh@10 -- # set +x 00:22:53.862 [2024-11-21 02:39:34.291989] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc5a840 was disconnected and freed. delete nvme_qpair. 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.862 02:39:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.862 02:39:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.862 02:39:34 -- common/autotest_common.sh@10 -- # set +x 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.862 02:39:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:53.862 02:39:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.798 02:39:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.798 02:39:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.798 02:39:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.798 02:39:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.798 02:39:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.798 02:39:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.798 02:39:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.056 02:39:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.056 02:39:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:55.056 02:39:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.993 02:39:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.993 02:39:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.993 02:39:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.993 02:39:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.993 02:39:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.993 02:39:36 -- common/autotest_common.sh@10 -- # set +x 00:22:55.993 02:39:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.993 02:39:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.993 02:39:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:55.993 02:39:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:56.930 02:39:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.930 02:39:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:56.930 02:39:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.930 02:39:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.930 02:39:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.930 02:39:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.930 02:39:37 -- common/autotest_common.sh@10 -- # set +x 00:22:56.930 02:39:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.189 02:39:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:57.189 02:39:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:58.126 02:39:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.126 02:39:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.126 02:39:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.126 02:39:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.126 02:39:38 -- common/autotest_common.sh@10 -- # set +x 00:22:58.126 02:39:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:58.126 02:39:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:58.126 02:39:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.126 02:39:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:58.127 02:39:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:59.059 02:39:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:59.059 02:39:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:59.059 02:39:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:59.059 02:39:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.059 02:39:39 -- common/autotest_common.sh@10 -- # set +x 00:22:59.059 02:39:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:59.059 02:39:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:59.059 02:39:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.318 02:39:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:59.318 02:39:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:59.318 [2024-11-21 02:39:39.723299] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:59.318 [2024-11-21 02:39:39.723385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.318 [2024-11-21 02:39:39.723402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.318 [2024-11-21 02:39:39.723415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.318 [2024-11-21 02:39:39.723424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.318 [2024-11-21 02:39:39.723435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.318 [2024-11-21 02:39:39.723443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.318 [2024-11-21 02:39:39.723453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.318 [2024-11-21 02:39:39.723462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.318 [2024-11-21 02:39:39.723472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.318 [2024-11-21 02:39:39.723480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.318 [2024-11-21 02:39:39.723489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd19f0 is same with the state(5) to be set 00:22:59.318 [2024-11-21 02:39:39.733293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd19f0 (9): Bad file descriptor 00:22:59.318 [2024-11-21 02:39:39.743315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:00.254 02:39:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.254 02:39:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.254 02:39:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.254 02:39:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.254 02:39:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.254 02:39:40 -- common/autotest_common.sh@10 -- # set +x 00:23:00.254 02:39:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.254 [2024-11-21 02:39:40.759877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:01.188 [2024-11-21 02:39:41.783873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:01.188 [2024-11-21 02:39:41.783983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd19f0 with addr=10.0.0.2, port=4420 00:23:01.188 [2024-11-21 02:39:41.784025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd19f0 is same with the state(5) to be set 00:23:01.188 [2024-11-21 02:39:41.784084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:01.188 [2024-11-21 02:39:41.784113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:01.188 [2024-11-21 02:39:41.784136] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:01.188 [2024-11-21 02:39:41.784160] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:01.188 [2024-11-21 02:39:41.785007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd19f0 (9): Bad file descriptor 00:23:01.188 [2024-11-21 02:39:41.785096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
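
The connect() errno 110 retries and the final "Resetting controller failed" above follow from the short failover timeouts passed to bdev_nvme_start_discovery at the start of this test; the command below repeats the values from the trace, with the option meanings paraphrased from bdev_nvme (not quoted from the log):

    #   --ctrlr-loss-timeout-sec 2    keep retrying ~2s, then give up and delete the controller/bdev
    #   --reconnect-delay-sec 1       wait 1s between reconnect attempts
    #   --fast-io-fail-timeout-sec 1  start failing I/O after 1s without a connection
    #   --wait-for-attach             block the RPC until the initial attach completes
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
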
00:23:01.188 [2024-11-21 02:39:41.785160] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:01.188 [2024-11-21 02:39:41.785240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.188 [2024-11-21 02:39:41.785276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.188 [2024-11-21 02:39:41.785307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.188 [2024-11-21 02:39:41.785330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.188 [2024-11-21 02:39:41.785355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.188 [2024-11-21 02:39:41.785378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.188 [2024-11-21 02:39:41.785404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.188 [2024-11-21 02:39:41.785426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.188 [2024-11-21 02:39:41.785451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:01.188 [2024-11-21 02:39:41.785474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:01.188 [2024-11-21 02:39:41.785496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:01.188 [2024-11-21 02:39:41.785532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd1e00 (9): Bad file descriptor 00:23:01.188 [2024-11-21 02:39:41.786133] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:01.188 [2024-11-21 02:39:41.786191] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:01.188 02:39:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.188 02:39:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:01.188 02:39:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.622 02:39:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.622 02:39:42 -- common/autotest_common.sh@10 -- # set +x 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:02.622 02:39:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.622 02:39:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.622 02:39:42 -- common/autotest_common.sh@10 -- # set +x 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:02.622 02:39:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:02.622 02:39:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:03.203 [2024-11-21 02:39:43.795411] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:03.203 [2024-11-21 02:39:43.795436] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:03.203 [2024-11-21 02:39:43.795455] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.462 [2024-11-21 02:39:43.881512] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:03.462 [2024-11-21 02:39:43.936585] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:03.462 [2024-11-21 02:39:43.936637] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:03.462 [2024-11-21 02:39:43.936663] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:03.462 [2024-11-21 02:39:43.936681] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:23:03.462 [2024-11-21 02:39:43.936689] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:03.462 [2024-11-21 02:39:43.943896] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc15080 was disconnected and freed. delete nvme_qpair. 00:23:03.462 02:39:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:03.462 02:39:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.462 02:39:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.462 02:39:43 -- common/autotest_common.sh@10 -- # set +x 00:23:03.462 02:39:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:03.462 02:39:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:03.462 02:39:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:03.462 02:39:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.462 02:39:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:03.462 02:39:43 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:03.462 02:39:43 -- host/discovery_remove_ifc.sh@90 -- # killprocess 86286 00:23:03.462 02:39:43 -- common/autotest_common.sh@936 -- # '[' -z 86286 ']' 00:23:03.462 02:39:43 -- common/autotest_common.sh@940 -- # kill -0 86286 00:23:03.462 02:39:43 -- common/autotest_common.sh@941 -- # uname 00:23:03.462 02:39:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.462 02:39:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86286 00:23:03.463 02:39:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:03.463 killing process with pid 86286 00:23:03.463 02:39:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:03.463 02:39:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86286' 00:23:03.463 02:39:44 -- common/autotest_common.sh@955 -- # kill 86286 00:23:03.463 02:39:44 -- common/autotest_common.sh@960 -- # wait 86286 00:23:03.721 02:39:44 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:03.721 02:39:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:03.721 02:39:44 -- nvmf/common.sh@116 -- # sync 00:23:03.980 02:39:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:03.980 02:39:44 -- nvmf/common.sh@119 -- # set +e 00:23:03.980 02:39:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:03.980 02:39:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:03.980 rmmod nvme_tcp 00:23:03.980 rmmod nvme_fabrics 00:23:03.980 rmmod nvme_keyring 00:23:03.980 02:39:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:03.980 02:39:44 -- nvmf/common.sh@123 -- # set -e 00:23:03.980 02:39:44 -- nvmf/common.sh@124 -- # return 0 00:23:03.980 02:39:44 -- nvmf/common.sh@477 -- # '[' -n 86236 ']' 00:23:03.980 02:39:44 -- nvmf/common.sh@478 -- # killprocess 86236 00:23:03.981 02:39:44 -- common/autotest_common.sh@936 -- # '[' -z 86236 ']' 00:23:03.981 02:39:44 -- common/autotest_common.sh@940 -- # kill -0 86236 00:23:03.981 02:39:44 -- common/autotest_common.sh@941 -- # uname 00:23:03.981 02:39:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.981 02:39:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86236 00:23:03.981 02:39:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:03.981 02:39:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:03.981 killing process with pid 86236 
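
The teardown above runs the killprocess helper twice (hostpid 86286, then nvmfpid 86236). A rough reconstruction from the xtrace; the real helper in autotest_common.sh has more handling (including a special case when the process is wrapped in sudo), so treat this as a sketch only:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                        # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap it so teardown cannot race
    }
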
00:23:03.981 02:39:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86236' 00:23:03.981 02:39:44 -- common/autotest_common.sh@955 -- # kill 86236 00:23:03.981 02:39:44 -- common/autotest_common.sh@960 -- # wait 86236 00:23:04.240 02:39:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:04.240 02:39:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:04.240 02:39:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:04.240 02:39:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.240 02:39:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:04.240 02:39:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.240 02:39:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.240 02:39:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.240 02:39:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:04.240 00:23:04.240 real 0m14.503s 00:23:04.240 user 0m24.885s 00:23:04.240 sys 0m1.532s 00:23:04.240 02:39:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:04.240 ************************************ 00:23:04.240 END TEST nvmf_discovery_remove_ifc 00:23:04.240 ************************************ 00:23:04.240 02:39:44 -- common/autotest_common.sh@10 -- # set +x 00:23:04.240 02:39:44 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:23:04.240 02:39:44 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:04.240 02:39:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:04.240 02:39:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:04.240 02:39:44 -- common/autotest_common.sh@10 -- # set +x 00:23:04.240 ************************************ 00:23:04.240 START TEST nvmf_digest 00:23:04.240 ************************************ 00:23:04.240 02:39:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:04.240 * Looking for test storage... 00:23:04.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:04.240 02:39:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:04.240 02:39:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:04.240 02:39:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:04.500 02:39:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:04.500 02:39:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:04.500 02:39:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:04.500 02:39:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:04.500 02:39:44 -- scripts/common.sh@335 -- # IFS=.-: 00:23:04.500 02:39:44 -- scripts/common.sh@335 -- # read -ra ver1 00:23:04.500 02:39:44 -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.500 02:39:44 -- scripts/common.sh@336 -- # read -ra ver2 00:23:04.500 02:39:44 -- scripts/common.sh@337 -- # local 'op=<' 00:23:04.500 02:39:44 -- scripts/common.sh@339 -- # ver1_l=2 00:23:04.500 02:39:44 -- scripts/common.sh@340 -- # ver2_l=1 00:23:04.500 02:39:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:04.500 02:39:44 -- scripts/common.sh@343 -- # case "$op" in 00:23:04.500 02:39:44 -- scripts/common.sh@344 -- # : 1 00:23:04.500 02:39:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:04.500 02:39:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.500 02:39:44 -- scripts/common.sh@364 -- # decimal 1 00:23:04.500 02:39:44 -- scripts/common.sh@352 -- # local d=1 00:23:04.500 02:39:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.500 02:39:44 -- scripts/common.sh@354 -- # echo 1 00:23:04.500 02:39:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:04.500 02:39:44 -- scripts/common.sh@365 -- # decimal 2 00:23:04.500 02:39:44 -- scripts/common.sh@352 -- # local d=2 00:23:04.500 02:39:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.500 02:39:44 -- scripts/common.sh@354 -- # echo 2 00:23:04.500 02:39:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:04.500 02:39:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:04.500 02:39:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:04.500 02:39:44 -- scripts/common.sh@367 -- # return 0 00:23:04.500 02:39:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.500 02:39:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.500 --rc genhtml_branch_coverage=1 00:23:04.500 --rc genhtml_function_coverage=1 00:23:04.500 --rc genhtml_legend=1 00:23:04.500 --rc geninfo_all_blocks=1 00:23:04.500 --rc geninfo_unexecuted_blocks=1 00:23:04.500 00:23:04.500 ' 00:23:04.500 02:39:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.500 --rc genhtml_branch_coverage=1 00:23:04.500 --rc genhtml_function_coverage=1 00:23:04.500 --rc genhtml_legend=1 00:23:04.500 --rc geninfo_all_blocks=1 00:23:04.500 --rc geninfo_unexecuted_blocks=1 00:23:04.500 00:23:04.500 ' 00:23:04.500 02:39:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.500 --rc genhtml_branch_coverage=1 00:23:04.500 --rc genhtml_function_coverage=1 00:23:04.500 --rc genhtml_legend=1 00:23:04.500 --rc geninfo_all_blocks=1 00:23:04.500 --rc geninfo_unexecuted_blocks=1 00:23:04.500 00:23:04.500 ' 00:23:04.500 02:39:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.500 --rc genhtml_branch_coverage=1 00:23:04.500 --rc genhtml_function_coverage=1 00:23:04.500 --rc genhtml_legend=1 00:23:04.500 --rc geninfo_all_blocks=1 00:23:04.500 --rc geninfo_unexecuted_blocks=1 00:23:04.500 00:23:04.500 ' 00:23:04.500 02:39:44 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.500 02:39:44 -- nvmf/common.sh@7 -- # uname -s 00:23:04.500 02:39:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.500 02:39:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.500 02:39:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.500 02:39:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.500 02:39:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.500 02:39:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.500 02:39:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.500 02:39:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.500 02:39:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.500 02:39:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.500 02:39:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:23:04.500 
02:39:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:23:04.500 02:39:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.500 02:39:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.500 02:39:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.500 02:39:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.500 02:39:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.500 02:39:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.500 02:39:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.500 02:39:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.500 02:39:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.500 02:39:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.500 02:39:44 -- paths/export.sh@5 -- # export PATH 00:23:04.500 02:39:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.501 02:39:44 -- nvmf/common.sh@46 -- # : 0 00:23:04.501 02:39:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:04.501 02:39:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:04.501 02:39:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:04.501 02:39:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.501 02:39:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.501 02:39:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:23:04.501 02:39:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:04.501 02:39:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:04.501 02:39:44 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:04.501 02:39:44 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:04.501 02:39:44 -- host/digest.sh@16 -- # runtime=2 00:23:04.501 02:39:44 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:23:04.501 02:39:44 -- host/digest.sh@132 -- # nvmftestinit 00:23:04.501 02:39:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:04.501 02:39:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.501 02:39:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:04.501 02:39:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:04.501 02:39:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:04.501 02:39:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.501 02:39:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.501 02:39:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.501 02:39:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:04.501 02:39:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:04.501 02:39:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:04.501 02:39:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:04.501 02:39:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:04.501 02:39:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:04.501 02:39:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.501 02:39:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.501 02:39:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:04.501 02:39:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:04.501 02:39:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:04.501 02:39:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:04.501 02:39:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:04.501 02:39:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.501 02:39:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:04.501 02:39:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:04.501 02:39:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:04.501 02:39:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:04.501 02:39:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:04.501 02:39:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:04.501 Cannot find device "nvmf_tgt_br" 00:23:04.501 02:39:45 -- nvmf/common.sh@154 -- # true 00:23:04.501 02:39:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.501 Cannot find device "nvmf_tgt_br2" 00:23:04.501 02:39:45 -- nvmf/common.sh@155 -- # true 00:23:04.501 02:39:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:04.501 02:39:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:04.501 Cannot find device "nvmf_tgt_br" 00:23:04.501 02:39:45 -- nvmf/common.sh@157 -- # true 00:23:04.501 02:39:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:04.501 Cannot find device "nvmf_tgt_br2" 00:23:04.501 02:39:45 -- nvmf/common.sh@158 -- # true 00:23:04.501 02:39:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:04.501 02:39:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:04.501 
02:39:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:04.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.501 02:39:45 -- nvmf/common.sh@161 -- # true 00:23:04.501 02:39:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:04.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.501 02:39:45 -- nvmf/common.sh@162 -- # true 00:23:04.501 02:39:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:04.501 02:39:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:04.760 02:39:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:04.760 02:39:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:04.760 02:39:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:04.760 02:39:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:04.760 02:39:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:04.760 02:39:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:04.760 02:39:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:04.760 02:39:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:04.760 02:39:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:04.760 02:39:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:04.760 02:39:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:04.760 02:39:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:04.760 02:39:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:04.760 02:39:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:04.760 02:39:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:04.760 02:39:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:04.760 02:39:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:04.760 02:39:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:04.760 02:39:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:04.760 02:39:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:04.760 02:39:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:04.760 02:39:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:04.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:23:04.760 00:23:04.760 --- 10.0.0.2 ping statistics --- 00:23:04.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.760 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:23:04.760 02:39:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:04.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:04.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:23:04.760 00:23:04.760 --- 10.0.0.3 ping statistics --- 00:23:04.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.760 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:04.760 02:39:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:04.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:04.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:04.760 00:23:04.760 --- 10.0.0.1 ping statistics --- 00:23:04.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.760 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:04.760 02:39:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.760 02:39:45 -- nvmf/common.sh@421 -- # return 0 00:23:04.760 02:39:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:04.760 02:39:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.760 02:39:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:04.760 02:39:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:04.760 02:39:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.760 02:39:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:04.760 02:39:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:04.760 02:39:45 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:04.760 02:39:45 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:23:04.760 02:39:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:04.760 02:39:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:04.760 02:39:45 -- common/autotest_common.sh@10 -- # set +x 00:23:04.760 ************************************ 00:23:04.760 START TEST nvmf_digest_clean 00:23:04.760 ************************************ 00:23:04.760 02:39:45 -- common/autotest_common.sh@1114 -- # run_digest 00:23:04.760 02:39:45 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:23:04.760 02:39:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:04.760 02:39:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.760 02:39:45 -- common/autotest_common.sh@10 -- # set +x 00:23:04.760 02:39:45 -- nvmf/common.sh@469 -- # nvmfpid=86709 00:23:04.760 02:39:45 -- nvmf/common.sh@470 -- # waitforlisten 86709 00:23:04.760 02:39:45 -- common/autotest_common.sh@829 -- # '[' -z 86709 ']' 00:23:04.760 02:39:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.760 02:39:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.760 02:39:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.760 02:39:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.760 02:39:45 -- common/autotest_common.sh@10 -- # set +x 00:23:04.760 02:39:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:05.018 [2024-11-21 02:39:45.434600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
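
Before the digest target starts, nvmf_veth_init (traced above) builds the virtual test network: the initiator stays in the root namespace on 10.0.0.1, the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2/10.0.0.3, and everything is joined through the nvmf_br bridge. A condensed view of the same steps, grouped for readability (all commands appear in the trace; the intermediate "ip link set ... up" steps are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target, first IP
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target, second IP
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity is then verified with the three pings shown above
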
00:23:05.018 [2024-11-21 02:39:45.434695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.018 [2024-11-21 02:39:45.575617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.277 [2024-11-21 02:39:45.686369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:05.277 [2024-11-21 02:39:45.686563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.277 [2024-11-21 02:39:45.686582] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.277 [2024-11-21 02:39:45.686594] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.277 [2024-11-21 02:39:45.686643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.847 02:39:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.847 02:39:46 -- common/autotest_common.sh@862 -- # return 0 00:23:05.847 02:39:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:05.847 02:39:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:05.847 02:39:46 -- common/autotest_common.sh@10 -- # set +x 00:23:05.847 02:39:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.847 02:39:46 -- host/digest.sh@120 -- # common_target_config 00:23:05.847 02:39:46 -- host/digest.sh@43 -- # rpc_cmd 00:23:05.847 02:39:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.847 02:39:46 -- common/autotest_common.sh@10 -- # set +x 00:23:06.106 null0 00:23:06.106 [2024-11-21 02:39:46.525969] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.106 [2024-11-21 02:39:46.550157] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.106 02:39:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.106 02:39:46 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:23:06.106 02:39:46 -- host/digest.sh@77 -- # local rw bs qd 00:23:06.106 02:39:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:06.106 02:39:46 -- host/digest.sh@80 -- # rw=randread 00:23:06.106 02:39:46 -- host/digest.sh@80 -- # bs=4096 00:23:06.106 02:39:46 -- host/digest.sh@80 -- # qd=128 00:23:06.106 02:39:46 -- host/digest.sh@82 -- # bperfpid=86759 00:23:06.106 02:39:46 -- host/digest.sh@83 -- # waitforlisten 86759 /var/tmp/bperf.sock 00:23:06.106 02:39:46 -- common/autotest_common.sh@829 -- # '[' -z 86759 ']' 00:23:06.106 02:39:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:06.106 02:39:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:06.106 02:39:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.106 02:39:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:06.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
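
The common_target_config step (digest.sh@43) feeds one config to rpc_cmd; judging from the "null0", "TCP Transport Init" and "Listening on 10.0.0.2 port 4420" notices above, it amounts to roughly the following individual RPCs. This is a hypothetical hand-expansion: the null-bdev geometry is a placeholder, the serial is taken from NVMF_SERIAL earlier in the log, and only the RPC names themselves are standard SPDK calls:

    rpc_cmd nvmf_create_transport -t tcp -o        # mirrors NVMF_TRANSPORT_OPTS='-t tcp -o'
    rpc_cmd bdev_null_create null0 100 4096        # size/block-size order assumed, not from the log
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
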
00:23:06.106 02:39:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.106 02:39:46 -- common/autotest_common.sh@10 -- # set +x 00:23:06.106 [2024-11-21 02:39:46.619698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:06.106 [2024-11-21 02:39:46.620043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86759 ] 00:23:06.366 [2024-11-21 02:39:46.761687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.366 [2024-11-21 02:39:46.870691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.303 02:39:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.303 02:39:47 -- common/autotest_common.sh@862 -- # return 0 00:23:07.303 02:39:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:07.303 02:39:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:07.303 02:39:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:07.561 02:39:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:07.561 02:39:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:07.820 nvme0n1 00:23:07.820 02:39:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:07.820 02:39:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:07.820 Running I/O for 2 seconds... 
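
Each of the four run_bperf jobs in this digest_clean test follows the pattern just traced: bdevperf is launched paused (--wait-for-rpc), the framework is started over the bperf socket, the controller is attached with --ddgst so every NVMe/TCP data PDU carries a crc32c data digest, and perform_tests drives I/O for 2 seconds. Roughly (paths shortened; bperf_rpc and bperf_py are the test's thin wrappers around these two scripts):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
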
00:23:10.362 00:23:10.363 Latency(us) 00:23:10.363 [2024-11-21T02:39:51.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.363 [2024-11-21T02:39:51.010Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:10.363 nvme0n1 : 2.00 23270.83 90.90 0.00 0.00 5493.27 2025.66 17277.67 00:23:10.363 [2024-11-21T02:39:51.010Z] =================================================================================================================== 00:23:10.363 [2024-11-21T02:39:51.010Z] Total : 23270.83 90.90 0.00 0.00 5493.27 2025.66 17277.67 00:23:10.363 0 00:23:10.363 02:39:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:10.363 02:39:50 -- host/digest.sh@92 -- # get_accel_stats 00:23:10.363 02:39:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:10.363 02:39:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:10.363 | select(.opcode=="crc32c") 00:23:10.363 | "\(.module_name) \(.executed)"' 00:23:10.363 02:39:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:10.363 02:39:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:10.363 02:39:50 -- host/digest.sh@93 -- # exp_module=software 00:23:10.363 02:39:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:10.363 02:39:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:10.363 02:39:50 -- host/digest.sh@97 -- # killprocess 86759 00:23:10.363 02:39:50 -- common/autotest_common.sh@936 -- # '[' -z 86759 ']' 00:23:10.363 02:39:50 -- common/autotest_common.sh@940 -- # kill -0 86759 00:23:10.363 02:39:50 -- common/autotest_common.sh@941 -- # uname 00:23:10.363 02:39:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:10.363 02:39:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86759 00:23:10.363 02:39:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:10.363 02:39:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:10.363 killing process with pid 86759 00:23:10.363 02:39:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86759' 00:23:10.363 02:39:50 -- common/autotest_common.sh@955 -- # kill 86759 00:23:10.363 Received shutdown signal, test time was about 2.000000 seconds 00:23:10.363 00:23:10.363 Latency(us) 00:23:10.363 [2024-11-21T02:39:51.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.363 [2024-11-21T02:39:51.010Z] =================================================================================================================== 00:23:10.363 [2024-11-21T02:39:51.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.363 02:39:50 -- common/autotest_common.sh@960 -- # wait 86759 00:23:10.363 02:39:50 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:23:10.363 02:39:50 -- host/digest.sh@77 -- # local rw bs qd 00:23:10.363 02:39:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:10.363 02:39:50 -- host/digest.sh@80 -- # rw=randread 00:23:10.363 02:39:50 -- host/digest.sh@80 -- # bs=131072 00:23:10.363 02:39:50 -- host/digest.sh@80 -- # qd=16 00:23:10.363 02:39:50 -- host/digest.sh@82 -- # bperfpid=86855 00:23:10.363 02:39:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:10.363 02:39:50 -- host/digest.sh@83 -- # waitforlisten 86855 /var/tmp/bperf.sock 00:23:10.363 02:39:50 -- 
common/autotest_common.sh@829 -- # '[' -z 86855 ']' 00:23:10.363 02:39:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:10.363 02:39:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:10.363 02:39:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:10.363 02:39:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.363 02:39:50 -- common/autotest_common.sh@10 -- # set +x 00:23:10.363 [2024-11-21 02:39:50.979455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:10.363 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:10.363 Zero copy mechanism will not be used. 00:23:10.363 [2024-11-21 02:39:50.980166] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86855 ] 00:23:10.622 [2024-11-21 02:39:51.115985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.622 [2024-11-21 02:39:51.188388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.558 02:39:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.558 02:39:51 -- common/autotest_common.sh@862 -- # return 0 00:23:11.558 02:39:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:11.558 02:39:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:11.558 02:39:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:11.817 02:39:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:11.817 02:39:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:12.076 nvme0n1 00:23:12.076 02:39:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:12.076 02:39:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:12.076 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:12.076 Zero copy mechanism will not be used. 00:23:12.076 Running I/O for 2 seconds... 
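
After each 2-second run the test decides pass/fail from the accel framework statistics rather than from the I/O numbers: with --ddgst enabled, crc32c operations must have been executed, and since no accel driver is loaded here the executing module must be "software". The check traced after every job (digest.sh@92-95) is essentially the following; the RPC call and jq filter are verbatim from the trace, while the process-substitution glue is added for this sketch:

    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    exp_module=software                      # no hardware accel module is configured
    (( acc_executed > 0 ))                   # data digests were actually computed
    [[ "$acc_module" == "$exp_module" ]]     # and computed by the expected module
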
00:23:14.608 00:23:14.608 Latency(us) 00:23:14.608 [2024-11-21T02:39:55.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.608 [2024-11-21T02:39:55.256Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:14.609 nvme0n1 : 2.00 9068.74 1133.59 0.00 0.00 1761.59 633.02 8877.15 00:23:14.609 [2024-11-21T02:39:55.256Z] =================================================================================================================== 00:23:14.609 [2024-11-21T02:39:55.256Z] Total : 9068.74 1133.59 0.00 0.00 1761.59 633.02 8877.15 00:23:14.609 0 00:23:14.609 02:39:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:14.609 02:39:54 -- host/digest.sh@92 -- # get_accel_stats 00:23:14.609 02:39:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:14.609 02:39:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:14.609 | select(.opcode=="crc32c") 00:23:14.609 | "\(.module_name) \(.executed)"' 00:23:14.609 02:39:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:14.609 02:39:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:14.609 02:39:54 -- host/digest.sh@93 -- # exp_module=software 00:23:14.609 02:39:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:14.609 02:39:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:14.609 02:39:54 -- host/digest.sh@97 -- # killprocess 86855 00:23:14.609 02:39:54 -- common/autotest_common.sh@936 -- # '[' -z 86855 ']' 00:23:14.609 02:39:54 -- common/autotest_common.sh@940 -- # kill -0 86855 00:23:14.609 02:39:54 -- common/autotest_common.sh@941 -- # uname 00:23:14.609 02:39:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:14.609 02:39:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86855 00:23:14.609 02:39:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:14.609 02:39:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:14.609 killing process with pid 86855 00:23:14.609 02:39:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86855' 00:23:14.609 02:39:54 -- common/autotest_common.sh@955 -- # kill 86855 00:23:14.609 Received shutdown signal, test time was about 2.000000 seconds 00:23:14.609 00:23:14.609 Latency(us) 00:23:14.609 [2024-11-21T02:39:55.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.609 [2024-11-21T02:39:55.256Z] =================================================================================================================== 00:23:14.609 [2024-11-21T02:39:55.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.609 02:39:54 -- common/autotest_common.sh@960 -- # wait 86855 00:23:14.609 02:39:55 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:23:14.609 02:39:55 -- host/digest.sh@77 -- # local rw bs qd 00:23:14.609 02:39:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:14.609 02:39:55 -- host/digest.sh@80 -- # rw=randwrite 00:23:14.609 02:39:55 -- host/digest.sh@80 -- # bs=4096 00:23:14.609 02:39:55 -- host/digest.sh@80 -- # qd=128 00:23:14.609 02:39:55 -- host/digest.sh@82 -- # bperfpid=86940 00:23:14.609 02:39:55 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:14.609 02:39:55 -- host/digest.sh@83 -- # waitforlisten 86940 /var/tmp/bperf.sock 00:23:14.609 02:39:55 -- 
common/autotest_common.sh@829 -- # '[' -z 86940 ']' 00:23:14.609 02:39:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:14.609 02:39:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:14.609 02:39:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:14.609 02:39:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.609 02:39:55 -- common/autotest_common.sh@10 -- # set +x 00:23:14.868 [2024-11-21 02:39:55.266406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:14.868 [2024-11-21 02:39:55.266499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86940 ] 00:23:14.868 [2024-11-21 02:39:55.402315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.868 [2024-11-21 02:39:55.483153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.805 02:39:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.805 02:39:56 -- common/autotest_common.sh@862 -- # return 0 00:23:15.805 02:39:56 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:15.805 02:39:56 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:15.805 02:39:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:16.064 02:39:56 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.064 02:39:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.323 nvme0n1 00:23:16.323 02:39:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:16.323 02:39:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:16.323 Running I/O for 2 seconds... 
00:23:18.857 00:23:18.857 Latency(us) 00:23:18.857 [2024-11-21T02:39:59.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.857 [2024-11-21T02:39:59.504Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:18.857 nvme0n1 : 2.01 29173.15 113.96 0.00 0.00 4382.90 1854.37 7923.90 00:23:18.857 [2024-11-21T02:39:59.504Z] =================================================================================================================== 00:23:18.857 [2024-11-21T02:39:59.504Z] Total : 29173.15 113.96 0.00 0.00 4382.90 1854.37 7923.90 00:23:18.857 0 00:23:18.857 02:39:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:18.857 02:39:58 -- host/digest.sh@92 -- # get_accel_stats 00:23:18.857 02:39:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:18.857 02:39:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:18.857 02:39:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:18.857 | select(.opcode=="crc32c") 00:23:18.857 | "\(.module_name) \(.executed)"' 00:23:18.857 02:39:59 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:18.857 02:39:59 -- host/digest.sh@93 -- # exp_module=software 00:23:18.857 02:39:59 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:18.857 02:39:59 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:18.857 02:39:59 -- host/digest.sh@97 -- # killprocess 86940 00:23:18.857 02:39:59 -- common/autotest_common.sh@936 -- # '[' -z 86940 ']' 00:23:18.857 02:39:59 -- common/autotest_common.sh@940 -- # kill -0 86940 00:23:18.857 02:39:59 -- common/autotest_common.sh@941 -- # uname 00:23:18.857 02:39:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.857 02:39:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86940 00:23:18.857 02:39:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:18.857 02:39:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:18.857 killing process with pid 86940 00:23:18.857 02:39:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86940' 00:23:18.857 Received shutdown signal, test time was about 2.000000 seconds 00:23:18.857 00:23:18.857 Latency(us) 00:23:18.857 [2024-11-21T02:39:59.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.857 [2024-11-21T02:39:59.504Z] =================================================================================================================== 00:23:18.857 [2024-11-21T02:39:59.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.857 02:39:59 -- common/autotest_common.sh@955 -- # kill 86940 00:23:18.857 02:39:59 -- common/autotest_common.sh@960 -- # wait 86940 00:23:18.857 02:39:59 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:23:18.857 02:39:59 -- host/digest.sh@77 -- # local rw bs qd 00:23:18.857 02:39:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:18.857 02:39:59 -- host/digest.sh@80 -- # rw=randwrite 00:23:18.857 02:39:59 -- host/digest.sh@80 -- # bs=131072 00:23:18.857 02:39:59 -- host/digest.sh@80 -- # qd=16 00:23:18.857 02:39:59 -- host/digest.sh@82 -- # bperfpid=87031 00:23:18.857 02:39:59 -- host/digest.sh@83 -- # waitforlisten 87031 /var/tmp/bperf.sock 00:23:18.857 02:39:59 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:18.857 02:39:59 -- 
common/autotest_common.sh@829 -- # '[' -z 87031 ']' 00:23:18.857 02:39:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:18.857 02:39:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:18.857 02:39:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:18.857 02:39:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.857 02:39:59 -- common/autotest_common.sh@10 -- # set +x 00:23:19.115 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:19.115 Zero copy mechanism will not be used. 00:23:19.115 [2024-11-21 02:39:59.531705] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:19.115 [2024-11-21 02:39:59.531847] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87031 ] 00:23:19.115 [2024-11-21 02:39:59.671103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.115 [2024-11-21 02:39:59.737485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.052 02:40:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.052 02:40:00 -- common/autotest_common.sh@862 -- # return 0 00:23:20.052 02:40:00 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:20.052 02:40:00 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:20.052 02:40:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:20.311 02:40:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:20.311 02:40:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:20.569 nvme0n1 00:23:20.569 02:40:01 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:20.569 02:40:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:20.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:20.569 Zero copy mechanism will not be used. 00:23:20.569 Running I/O for 2 seconds... 
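Each run_bperf invocation traced above follows the same cycle: start bdevperf paused, initialize its framework over /var/tmp/bperf.sock, attach the target with data digest enabled, drive the timed workload, then read back accel statistics to confirm the crc32c digests were actually computed. A condensed sketch of that cycle, reusing the commands and arguments visible in the trace (here the 128 KiB, queue-depth-16 randwrite case):

  spdk=/home/vagrant/spdk_repo/spdk

  # start bdevperf idle (-z) and paused until RPC configuration (--wait-for-rpc)
  "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &

  # finish subsystem init, then attach the NVMe-oF/TCP controller with data digest on
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the timed workload against nvme0n1
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # check which accel module executed the crc32c digests, and how many times
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

In the trace the test then asserts that the executed count is greater than zero and that the reporting module matches the expected one (software in these runs) before killing the bdevperf process.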
00:23:23.103 00:23:23.103 Latency(us) 00:23:23.103 [2024-11-21T02:40:03.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.103 [2024-11-21T02:40:03.750Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:23.103 nvme0n1 : 2.00 8142.89 1017.86 0.00 0.00 1960.93 1429.88 7000.44 00:23:23.103 [2024-11-21T02:40:03.750Z] =================================================================================================================== 00:23:23.103 [2024-11-21T02:40:03.750Z] Total : 8142.89 1017.86 0.00 0.00 1960.93 1429.88 7000.44 00:23:23.103 0 00:23:23.103 02:40:03 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:23.103 02:40:03 -- host/digest.sh@92 -- # get_accel_stats 00:23:23.103 02:40:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:23.103 02:40:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:23.103 02:40:03 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:23.103 | select(.opcode=="crc32c") 00:23:23.103 | "\(.module_name) \(.executed)"' 00:23:23.103 02:40:03 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:23.103 02:40:03 -- host/digest.sh@93 -- # exp_module=software 00:23:23.103 02:40:03 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:23.103 02:40:03 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:23.103 02:40:03 -- host/digest.sh@97 -- # killprocess 87031 00:23:23.103 02:40:03 -- common/autotest_common.sh@936 -- # '[' -z 87031 ']' 00:23:23.103 02:40:03 -- common/autotest_common.sh@940 -- # kill -0 87031 00:23:23.103 02:40:03 -- common/autotest_common.sh@941 -- # uname 00:23:23.103 02:40:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.103 02:40:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87031 00:23:23.103 02:40:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:23.103 02:40:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:23.103 killing process with pid 87031 00:23:23.103 02:40:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87031' 00:23:23.103 02:40:03 -- common/autotest_common.sh@955 -- # kill 87031 00:23:23.103 Received shutdown signal, test time was about 2.000000 seconds 00:23:23.103 00:23:23.103 Latency(us) 00:23:23.103 [2024-11-21T02:40:03.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.103 [2024-11-21T02:40:03.750Z] =================================================================================================================== 00:23:23.103 [2024-11-21T02:40:03.750Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.103 02:40:03 -- common/autotest_common.sh@960 -- # wait 87031 00:23:23.103 02:40:03 -- host/digest.sh@126 -- # killprocess 86709 00:23:23.103 02:40:03 -- common/autotest_common.sh@936 -- # '[' -z 86709 ']' 00:23:23.103 02:40:03 -- common/autotest_common.sh@940 -- # kill -0 86709 00:23:23.103 02:40:03 -- common/autotest_common.sh@941 -- # uname 00:23:23.103 02:40:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.103 02:40:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86709 00:23:23.103 02:40:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:23.103 02:40:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:23.103 killing process with pid 86709 00:23:23.103 02:40:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86709' 
00:23:23.103 02:40:03 -- common/autotest_common.sh@955 -- # kill 86709 00:23:23.103 02:40:03 -- common/autotest_common.sh@960 -- # wait 86709 00:23:23.671 00:23:23.671 real 0m18.665s 00:23:23.671 user 0m34.247s 00:23:23.671 sys 0m5.446s 00:23:23.671 02:40:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:23.671 02:40:04 -- common/autotest_common.sh@10 -- # set +x 00:23:23.671 ************************************ 00:23:23.671 END TEST nvmf_digest_clean 00:23:23.671 ************************************ 00:23:23.671 02:40:04 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:23:23.671 02:40:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:23.671 02:40:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:23.671 02:40:04 -- common/autotest_common.sh@10 -- # set +x 00:23:23.671 ************************************ 00:23:23.671 START TEST nvmf_digest_error 00:23:23.671 ************************************ 00:23:23.671 02:40:04 -- common/autotest_common.sh@1114 -- # run_digest_error 00:23:23.671 02:40:04 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:23:23.671 02:40:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:23.671 02:40:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.671 02:40:04 -- common/autotest_common.sh@10 -- # set +x 00:23:23.671 02:40:04 -- nvmf/common.sh@469 -- # nvmfpid=87150 00:23:23.671 02:40:04 -- nvmf/common.sh@470 -- # waitforlisten 87150 00:23:23.671 02:40:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:23.671 02:40:04 -- common/autotest_common.sh@829 -- # '[' -z 87150 ']' 00:23:23.671 02:40:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.671 02:40:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.671 02:40:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.671 02:40:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.671 02:40:04 -- common/autotest_common.sh@10 -- # set +x 00:23:23.671 [2024-11-21 02:40:04.150609] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:23.671 [2024-11-21 02:40:04.150705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.671 [2024-11-21 02:40:04.288018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.930 [2024-11-21 02:40:04.363971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:23.930 [2024-11-21 02:40:04.364131] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.930 [2024-11-21 02:40:04.364143] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.930 [2024-11-21 02:40:04.364151] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.930 [2024-11-21 02:40:04.364187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.864 02:40:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.864 02:40:05 -- common/autotest_common.sh@862 -- # return 0 00:23:24.864 02:40:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:24.864 02:40:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.864 02:40:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.864 02:40:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.864 02:40:05 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:24.864 02:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.864 02:40:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.864 [2024-11-21 02:40:05.196661] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:24.864 02:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.864 02:40:05 -- host/digest.sh@104 -- # common_target_config 00:23:24.864 02:40:05 -- host/digest.sh@43 -- # rpc_cmd 00:23:24.864 02:40:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.864 02:40:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.864 null0 00:23:24.864 [2024-11-21 02:40:05.330368] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.864 [2024-11-21 02:40:05.354522] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:24.864 02:40:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.864 02:40:05 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:23:24.864 02:40:05 -- host/digest.sh@54 -- # local rw bs qd 00:23:24.864 02:40:05 -- host/digest.sh@56 -- # rw=randread 00:23:24.864 02:40:05 -- host/digest.sh@56 -- # bs=4096 00:23:24.864 02:40:05 -- host/digest.sh@56 -- # qd=128 00:23:24.864 02:40:05 -- host/digest.sh@58 -- # bperfpid=87194 00:23:24.864 02:40:05 -- host/digest.sh@60 -- # waitforlisten 87194 /var/tmp/bperf.sock 00:23:24.864 02:40:05 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:24.864 02:40:05 -- common/autotest_common.sh@829 -- # '[' -z 87194 ']' 00:23:24.864 02:40:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:24.864 02:40:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.864 02:40:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:24.864 02:40:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.864 02:40:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.864 [2024-11-21 02:40:05.406031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
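For the nvmf_digest_error target, crc32c is routed through the error accel module (the accel_assign_opc call above), and common_target_config then produces the null0 bdev, TCP transport and 10.0.0.2:4420 listener notices that follow. The helper itself is not shown in this trace; a hedged sketch of an RPC sequence that would produce equivalent notices (subsystem name and listener address taken from the trace, null-bdev size and block size chosen here purely for illustration) might look like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # route crc32c through the error-injection accel module (shown verbatim in the trace)
  "$rpc" accel_assign_opc -o crc32c -m error
  "$rpc" framework_start_init

  # back the subsystem with a null bdev and expose it over NVMe/TCP
  "$rpc" bdev_null_create null0 100 4096            # 100 MiB, 4 KiB blocks: illustrative values
  "$rpc" nvmf_create_transport -t tcp
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420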
00:23:24.864 [2024-11-21 02:40:05.406130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87194 ] 00:23:25.122 [2024-11-21 02:40:05.539226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.122 [2024-11-21 02:40:05.640936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.057 02:40:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.057 02:40:06 -- common/autotest_common.sh@862 -- # return 0 00:23:26.057 02:40:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:26.057 02:40:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:26.057 02:40:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:26.057 02:40:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.057 02:40:06 -- common/autotest_common.sh@10 -- # set +x 00:23:26.057 02:40:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.057 02:40:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:26.057 02:40:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:26.316 nvme0n1 00:23:26.316 02:40:06 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:26.316 02:40:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.316 02:40:06 -- common/autotest_common.sh@10 -- # set +x 00:23:26.316 02:40:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.316 02:40:06 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:26.316 02:40:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:26.316 Running I/O for 2 seconds... 
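In the error-injection run traced here, the host-side bdevperf first enables per-error NVMe statistics and sets the bdev retry count, the target's crc32c error injection is disabled while the controller connects, and only after the controller is attached with data digests enabled is corruption injected (the -t corrupt -i 256 call), which produces the data digest failures seen below. A condensed sketch using the commands visible in the trace; the plain rpc.py calls stand in for the rpc_cmd helper, which targets the nvmf application's RPC socket:

  spdk=/home/vagrant/spdk_repo/spdk

  # host side: per-error statistics and bdev retry count, as in the trace
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # target side: keep the connect itself clean
  "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # attach with data digest enabled, then start corrupting crc32c on the target
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # drive I/O; the digest errors below are the injected corruption being detected
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests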
00:23:26.316 [2024-11-21 02:40:06.915103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.316 [2024-11-21 02:40:06.915175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-21 02:40:06.915188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.316 [2024-11-21 02:40:06.925083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.316 [2024-11-21 02:40:06.925115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-21 02:40:06.925126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.316 [2024-11-21 02:40:06.934888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.316 [2024-11-21 02:40:06.934935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-21 02:40:06.934947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.316 [2024-11-21 02:40:06.944754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.316 [2024-11-21 02:40:06.944784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-21 02:40:06.944794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.316 [2024-11-21 02:40:06.957751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.316 [2024-11-21 02:40:06.957791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-21 02:40:06.957803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:06.970454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:06.970501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:06.970512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:06.982311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:06.982390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:06.982403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:06.991721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:06.991763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:06.991774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.003565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.003597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.003608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.012759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.012790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.012801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.023282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.023313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.023324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.031391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.031422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.031432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.041153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.041200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.041212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.050849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.050912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.050925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.063283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.063314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.063324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.077080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.077143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.077154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.087537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.087568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.576 [2024-11-21 02:40:07.087579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.576 [2024-11-21 02:40:07.100196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.576 [2024-11-21 02:40:07.100227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.100238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.112338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.112369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.112379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.120491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.120522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.120533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.132829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.132859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.132869] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.144689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.144721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.144732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.156803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.156833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.156843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.168765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.168795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.168806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.181646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.181678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.181688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.193050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.193083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.193094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.202215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.202263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.577 [2024-11-21 02:40:07.202276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.577 [2024-11-21 02:40:07.213735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.577 [2024-11-21 02:40:07.213775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:26.577 [2024-11-21 02:40:07.213785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.223523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.223571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.223583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.233354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.233386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.233396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.243306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.243338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.243349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.254113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.254161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.254173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.263816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.263845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.263855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.276309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.276340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.276351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.288128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.288158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17134 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.288169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.299634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.299666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.299677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.310781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.310825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.310836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.320222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.320268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.320279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.333519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.333549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.333560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.345832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.345860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.345870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.358147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.358177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.358189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.366900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.366928] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.366939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.376474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.376503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.376515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.386745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.386782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.386794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.397481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.397509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.397520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.407415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.407444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.407455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.417919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.417948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.417958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.426885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.426913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.837 [2024-11-21 02:40:07.426923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.837 [2024-11-21 02:40:07.436585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.837 [2024-11-21 02:40:07.436614] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.838 [2024-11-21 02:40:07.436625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.838 [2024-11-21 02:40:07.446254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.838 [2024-11-21 02:40:07.446284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.838 [2024-11-21 02:40:07.446297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.838 [2024-11-21 02:40:07.455838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.838 [2024-11-21 02:40:07.455866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.838 [2024-11-21 02:40:07.455877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:26.838 [2024-11-21 02:40:07.468697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:26.838 [2024-11-21 02:40:07.468728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.838 [2024-11-21 02:40:07.468750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.481485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.481533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.481544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.494855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.494888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.494899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.508014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.508064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.508076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.517305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 
00:23:27.097 [2024-11-21 02:40:07.517354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.517366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.527391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.527442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.527454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.536631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.536681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.536693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.546031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.546223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.546245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.558183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.558237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.558249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.569680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.569715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.569743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.581490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.581526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.581554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.594455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.594489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.594515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.605804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.605859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.605873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.614993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.615027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.615054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.627448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.627483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.627511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.640364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.640399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.640426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.652324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.652358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.652385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.664792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.664839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.664850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.676484] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.676520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.676547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.685940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.097 [2024-11-21 02:40:07.685973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.097 [2024-11-21 02:40:07.686000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.097 [2024-11-21 02:40:07.697633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.098 [2024-11-21 02:40:07.697683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.098 [2024-11-21 02:40:07.697711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.098 [2024-11-21 02:40:07.710535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.098 [2024-11-21 02:40:07.710569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.098 [2024-11-21 02:40:07.710597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.098 [2024-11-21 02:40:07.719805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.098 [2024-11-21 02:40:07.719838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.098 [2024-11-21 02:40:07.719865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.098 [2024-11-21 02:40:07.730300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.098 [2024-11-21 02:40:07.730354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.098 [2024-11-21 02:40:07.730366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.098 [2024-11-21 02:40:07.738903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.098 [2024-11-21 02:40:07.738968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.098 [2024-11-21 02:40:07.738995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:27.357 [2024-11-21 02:40:07.750678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.750711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.750738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.763073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.763107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.763135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.772033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.772067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.772094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.782511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.782546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.782573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.791872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.791906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.791932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.803472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.803507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.803534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.815194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.815229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.815256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.827592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.827627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.827654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.839307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.357 [2024-11-21 02:40:07.839342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.357 [2024-11-21 02:40:07.839369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.357 [2024-11-21 02:40:07.849732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.849777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.849804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.859021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.859054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.859082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.869052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.869085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.869113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.878887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.878922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.878948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.888466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.888501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.888528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.897484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.897518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.897545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.906877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.906911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.906937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.916319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.916353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.916380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.926629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.926663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.926690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.936030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.936064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.936091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.947793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.947827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.947855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.960339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.960374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.960402] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.971890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.971925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.971951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.984600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.984634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.984662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.358 [2024-11-21 02:40:07.996090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.358 [2024-11-21 02:40:07.996125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.358 [2024-11-21 02:40:07.996152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.007291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.007326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.007352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.016835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.016869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.016895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.025708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.025767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.025780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.035956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.035990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15634 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.036017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.045119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.045153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.045180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.054459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.054506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.054518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.064738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.064831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.064844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.074483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.074517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.074544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.084234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.084269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.084296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.095989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.096023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.096051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.108910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.108944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:10583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.618 [2024-11-21 02:40:08.108971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.618 [2024-11-21 02:40:08.118277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.618 [2024-11-21 02:40:08.118329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.118341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.127722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.127770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.127797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.138310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.138362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.138403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.148714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.148775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.148803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.159324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.159358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.159386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.168144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.168178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.168205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.176266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.176301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.176328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.188559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.188594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.188621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.200484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.200518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.200546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.212429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.212464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.212491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.222876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.222928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.222940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.235785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.235821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.235847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.245074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 [2024-11-21 02:40:08.245108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.245134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.619 [2024-11-21 02:40:08.255715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.619 
[2024-11-21 02:40:08.255762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.619 [2024-11-21 02:40:08.255789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.879 [2024-11-21 02:40:08.265653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.879 [2024-11-21 02:40:08.265720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.879 [2024-11-21 02:40:08.265762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.879 [2024-11-21 02:40:08.276720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.879 [2024-11-21 02:40:08.276765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.879 [2024-11-21 02:40:08.276793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.879 [2024-11-21 02:40:08.285903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.879 [2024-11-21 02:40:08.285937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.879 [2024-11-21 02:40:08.285964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.879 [2024-11-21 02:40:08.296261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.879 [2024-11-21 02:40:08.296296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.879 [2024-11-21 02:40:08.296323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.304892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.304927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.304954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.313520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.313555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.313582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.322618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.322651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.322679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.333560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.333593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.333620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.345051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.345086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.345114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.356768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.356802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.356828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.366348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.366414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.366442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.376066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.376099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.376127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.386004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.386060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.386088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.395861] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.395893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.395920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.405106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.405141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.405168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.415099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.415165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.415192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.423999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.424033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.424060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.433849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.433882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.433909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.444917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.444952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.444979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.457323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.457358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.457385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:27.880 [2024-11-21 02:40:08.465924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.465957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.465984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.478868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.478901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.478927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.489922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.489957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.489984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.499977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.500028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.500040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.510765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.510808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.510835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:27.880 [2024-11-21 02:40:08.520496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:27.880 [2024-11-21 02:40:08.520530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.880 [2024-11-21 02:40:08.520557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.140 [2024-11-21 02:40:08.530733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.530775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.530803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.541282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.541316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.541344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.550698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.550788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.550803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.559717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.559796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.559824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.572514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.572549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.572577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.584367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.584402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.584429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.595649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.595684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.595712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.604732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.604775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.604802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.618011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.618086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.618099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.628791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.628825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.628852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.639131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.639166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.639193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.649043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.649079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.649106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.658331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.658381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.658408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.666980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.667030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.667057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.676859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.676893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.676920] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.688853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.688886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.688913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.699110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.699144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.699170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.708802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.708852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.708879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.718518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.718552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.718579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.729051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.729103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.729145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.739728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.739772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.739799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.750216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.750281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:28.141 [2024-11-21 02:40:08.750295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.760575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.760609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.760636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.771855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.771906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.771918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.141 [2024-11-21 02:40:08.780692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.141 [2024-11-21 02:40:08.780770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.141 [2024-11-21 02:40:08.780800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.791825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.791876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.791887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.801185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.801220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.801247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.812378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.812411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.812439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.823248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.823283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10585 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.823310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.831590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.831625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.831652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.843889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.843939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.843951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.856645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.856679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.856707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.868300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.868334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.868362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.880268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.880304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.880331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 [2024-11-21 02:40:08.891150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf4df50) 00:23:28.401 [2024-11-21 02:40:08.891200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.401 [2024-11-21 02:40:08.891227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.401 00:23:28.401 Latency(us) 00:23:28.401 [2024-11-21T02:40:09.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.401 [2024-11-21T02:40:09.048Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO 
size: 4096) 00:23:28.401 nvme0n1 : 2.00 23780.77 92.89 0.00 0.00 5377.61 2472.49 17515.99 00:23:28.401 [2024-11-21T02:40:09.048Z] =================================================================================================================== 00:23:28.401 [2024-11-21T02:40:09.048Z] Total : 23780.77 92.89 0.00 0.00 5377.61 2472.49 17515.99 00:23:28.402 0 00:23:28.402 02:40:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:28.402 02:40:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:28.402 02:40:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:28.402 | .driver_specific 00:23:28.402 | .nvme_error 00:23:28.402 | .status_code 00:23:28.402 | .command_transient_transport_error' 00:23:28.402 02:40:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:28.662 02:40:09 -- host/digest.sh@71 -- # (( 186 > 0 )) 00:23:28.662 02:40:09 -- host/digest.sh@73 -- # killprocess 87194 00:23:28.662 02:40:09 -- common/autotest_common.sh@936 -- # '[' -z 87194 ']' 00:23:28.662 02:40:09 -- common/autotest_common.sh@940 -- # kill -0 87194 00:23:28.662 02:40:09 -- common/autotest_common.sh@941 -- # uname 00:23:28.662 02:40:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:28.662 02:40:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87194 00:23:28.662 02:40:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:28.662 02:40:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:28.662 killing process with pid 87194 00:23:28.662 02:40:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87194' 00:23:28.662 02:40:09 -- common/autotest_common.sh@955 -- # kill 87194 00:23:28.662 Received shutdown signal, test time was about 2.000000 seconds 00:23:28.662 00:23:28.662 Latency(us) 00:23:28.662 [2024-11-21T02:40:09.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.662 [2024-11-21T02:40:09.309Z] =================================================================================================================== 00:23:28.662 [2024-11-21T02:40:09.309Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.662 02:40:09 -- common/autotest_common.sh@960 -- # wait 87194 00:23:28.925 02:40:09 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:23:28.925 02:40:09 -- host/digest.sh@54 -- # local rw bs qd 00:23:28.925 02:40:09 -- host/digest.sh@56 -- # rw=randread 00:23:28.925 02:40:09 -- host/digest.sh@56 -- # bs=131072 00:23:28.925 02:40:09 -- host/digest.sh@56 -- # qd=16 00:23:28.925 02:40:09 -- host/digest.sh@58 -- # bperfpid=87279 00:23:28.925 02:40:09 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:28.925 02:40:09 -- host/digest.sh@60 -- # waitforlisten 87279 /var/tmp/bperf.sock 00:23:28.925 02:40:09 -- common/autotest_common.sh@829 -- # '[' -z 87279 ']' 00:23:28.925 02:40:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:28.925 02:40:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:28.925 02:40:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
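For reference, the transient-error check traced above (host/digest.sh get_transient_errcount) boils down to one RPC call plus a jq projection; the following is a minimal sketch that reuses only the socket path, bdev name and jq filter visible in the trace, with everything else illustrative:

# Fetch per-bdev I/O statistics from the bdevperf app listening on /var/tmp/bperf.sock
# (bdev_nvme_set_options --nvme-error-stat, as traced for the next run below, is what
# makes it keep per-status-code NVMe error counters in the first place).
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The test passes when at least one READ completed with COMMAND TRANSIENT TRANSPORT ERROR,
# which is the (( 186 > 0 )) evaluation seen above for this run.
(( errcount > 0 )) && echo "digest errors were surfaced as transient transport errors"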
00:23:28.925 02:40:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.925 02:40:09 -- common/autotest_common.sh@10 -- # set +x 00:23:28.925 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:28.925 Zero copy mechanism will not be used. 00:23:28.925 [2024-11-21 02:40:09.509230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:28.925 [2024-11-21 02:40:09.509319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87279 ] 00:23:29.184 [2024-11-21 02:40:09.638243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.184 [2024-11-21 02:40:09.716060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.122 02:40:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.122 02:40:10 -- common/autotest_common.sh@862 -- # return 0 00:23:30.122 02:40:10 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:30.122 02:40:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:30.122 02:40:10 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:30.122 02:40:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.122 02:40:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.381 02:40:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.381 02:40:10 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.381 02:40:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.640 nvme0n1 00:23:30.640 02:40:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:30.640 02:40:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.640 02:40:11 -- common/autotest_common.sh@10 -- # set +x 00:23:30.640 02:40:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.640 02:40:11 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:30.640 02:40:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:30.640 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:30.640 Zero copy mechanism will not be used. 00:23:30.640 Running I/O for 2 seconds... 
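Condensed, the setup traced above for this second pass amounts to: 128 KiB random reads at queue depth 16 for 2 seconds against a data-digest-enabled TCP controller, with every 32nd crc32c computation corrupted. The sketch below is illustrative only; all flags and addresses are copied from the traced commands, while the comments and the choice of RPC socket for the injection call are interpretation rather than part of the original script.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Keep NVMe error statistics in the bdevperf app and disable bdev-layer retries, so every
# digest failure is counted as a transient transport error instead of being retried away.
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the target subsystem over TCP with data digest enabled on the initiator (--ddgst).
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt every 32nd crc32c computation; the trace issues this through rpc_cmd, so the
# default RPC socket is assumed here rather than the bdevperf one.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# Run the timed workload (bdevperf was started with -w randread -o 131072 -q 16 -t 2 -z);
# its completions produce the data digest error records that follow.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests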
00:23:30.640 [2024-11-21 02:40:11.210452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.640 [2024-11-21 02:40:11.210493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.640 [2024-11-21 02:40:11.210508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.640 [2024-11-21 02:40:11.214951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.640 [2024-11-21 02:40:11.214984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.640 [2024-11-21 02:40:11.214996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.640 [2024-11-21 02:40:11.219225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.640 [2024-11-21 02:40:11.219256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.219267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.223357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.223388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.223399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.226501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.226530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.226541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.229536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.229567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.229577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.233289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.233319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.233330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.236885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.236914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.236925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.241333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.241362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.241373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.245107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.245138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.245149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.249538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.249569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.249580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.252878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.252909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.252920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.256389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.256419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.256430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.260200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.260231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.260242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.264071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.264101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.264111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.267534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.267565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.267577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.271280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.271311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.271322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.274951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.274997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.275009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.278424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.278470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.278497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.641 [2024-11-21 02:40:11.282755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.641 [2024-11-21 02:40:11.282829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.641 [2024-11-21 02:40:11.282841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.286949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.286994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:30.902 [2024-11-21 02:40:11.287005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.290997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.291043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.291055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.294539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.294569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.294579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.298702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.298732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.298770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.302176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.302224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.302238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.305903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.305932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.305943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.310013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.310050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.310078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.313472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.313502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.313513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.317259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.317289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.317300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.321034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.321064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.321074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.324541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.324571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.324581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.328265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.328294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.328304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.332234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.332265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.332276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.335886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.335915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.335925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.339928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.902 [2024-11-21 02:40:11.339956] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.902 [2024-11-21 02:40:11.339967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.902 [2024-11-21 02:40:11.343645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.343673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.343683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.347301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.347331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.347342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.351246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.351277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.351288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.355436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.355466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.355477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.359492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.359522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.359533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.363579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.363608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.363619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.366826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.366855] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.366865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.370003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.370072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.370084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.373538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.373567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.373578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.376933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.376977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.376988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.380786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.380814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.380824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.384711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.384750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.384763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.388547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.388576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.388586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.392079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.392107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.392118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.395884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.395913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.395924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.399547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.399592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.399603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.403262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.403293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.403304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.407211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.407240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.407251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.410747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.410787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.410798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.414692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.414722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.414733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.418209] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.418256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.418268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.421491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.421521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.421531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.425031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.425061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.425072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.428503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.428533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.428544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.431978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.432007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.432018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.435464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.435493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.435503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.903 [2024-11-21 02:40:11.439778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.903 [2024-11-21 02:40:11.439807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.903 [2024-11-21 02:40:11.439818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:30.903 [2024-11-21 02:40:11.443336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.443366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.443376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.447621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.447652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.447662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.451459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.451488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.451499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.454938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.454967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.454977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.459001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.459030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.459041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.462142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.462174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.462185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.465758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.465787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.465798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.469454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.469486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.469496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.473138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.473183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.473195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.477341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.477371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.477382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.481158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.481188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.481199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.484223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.484253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.484264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.488092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.488123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.488134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.491821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.491851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.491862] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.495958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.495987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.495998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.499993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.500023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.500034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.503927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.503957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.503967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.507808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.507836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.507846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.510593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.510622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.510633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.514208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.514255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.514266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.517426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.517456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:30.904 [2024-11-21 02:40:11.517467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.521319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.521349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.521360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.524888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.524918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.524928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.528519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.528550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.528560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.531839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.531868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.531878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.535615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.535644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.535654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:30.904 [2024-11-21 02:40:11.539126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.904 [2024-11-21 02:40:11.539156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.904 [2024-11-21 02:40:11.539166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:30.905 [2024-11-21 02:40:11.543079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:30.905 [2024-11-21 02:40:11.543141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.905 [2024-11-21 02:40:11.543152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.547321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.547351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.547362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.551136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.551183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.551195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.555560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.555590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.555601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.559966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.559998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.560009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.563565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.563595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.563606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.567329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.567357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.567368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.570896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.570924] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.570934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.574816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.574844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.574855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.166 [2024-11-21 02:40:11.578293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.166 [2024-11-21 02:40:11.578355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.166 [2024-11-21 02:40:11.578367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.582200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.582248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.582260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.585916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.585962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.585973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.589798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.589843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.589855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.593477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.593507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.593517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.596948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 
02:40:11.596978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.596988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.601242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.601272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.601283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.604957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.604987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.604998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.608683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.608713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.608724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.612218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.612247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.612258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.615818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.615846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.615856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.619367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.619397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.619407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.623069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.623113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.623125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.626657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.626686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.626696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.630253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.630300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.630311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.633794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.633823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.633833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.636595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.636625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.636635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.640732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.640773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.640784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.643995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.644039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.644051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.647845] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.647888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.647900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.652129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.652159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.652170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.655205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.655234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.167 [2024-11-21 02:40:11.655245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.167 [2024-11-21 02:40:11.658995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.167 [2024-11-21 02:40:11.659025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.659035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.662549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.662579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.662590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.665918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.665949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.665959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.669425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.669455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.669466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:31.168 [2024-11-21 02:40:11.673083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.673129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.673140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.676917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.676946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.676956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.680478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.680507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.680518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.683747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.683791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.683802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.687507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.687538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.687548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.691310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.691341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.691352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.694919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.694948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.694959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.698500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.698530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.698541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.702009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.702045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.702072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.705864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.705894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.705905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.709209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.709239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.709249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.713093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.713122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.713132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.717128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.717157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.717167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.720710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.720750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.720763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.724320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.724350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.724361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.727700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.727730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.727752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.731544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.731574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.731584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.735034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.735062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.168 [2024-11-21 02:40:11.735073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.168 [2024-11-21 02:40:11.739338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.168 [2024-11-21 02:40:11.739368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.739379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.743235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.743264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.743275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.747002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.747047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:31.169 [2024-11-21 02:40:11.747058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.750807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.750835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.750846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.754671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.754699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.754710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.759005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.759035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.759046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.763068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.763098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.763108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.766684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.766714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.766725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.770323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.770369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.770396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.773882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.773911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.773921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.777886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.777915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.777925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.782015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.782048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.782075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.786276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.786322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.786334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.789979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.790007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.790018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.793612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.793640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.793650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.797649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.797678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.797689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.801715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.801757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.801769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.169 [2024-11-21 02:40:11.804397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.169 [2024-11-21 02:40:11.804443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.169 [2024-11-21 02:40:11.804455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.808661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.808708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.808720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.813076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.813122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.813133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.816237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.816267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.816278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.820383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.820411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.820422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.824543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.824572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.824583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.827995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 
[2024-11-21 02:40:11.828025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.828035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.831891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.831920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.831931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.835519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.835550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.835560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.838992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.839023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.839034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.842764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.842803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.431 [2024-11-21 02:40:11.842814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.431 [2024-11-21 02:40:11.845940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.431 [2024-11-21 02:40:11.845969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.845979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.849921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.849950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.849960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.853913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.853942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.853953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.857652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.857682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.857693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.860530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.860560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.860570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.864478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.864508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.864518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.868503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.868534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.868544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.872371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.872417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.872429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.876341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.876370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.876380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.880268] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.880299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.880310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.883915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.883946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.883958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.887688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.887718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.887728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.891144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.891174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.891185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.894320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.894381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.894407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.898092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.898137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.898149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.901338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.901368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.901379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:31.432 [2024-11-21 02:40:11.905137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.905167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.905178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.908129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.908159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.908169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.911536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.911566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.911576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.915907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.915936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.915947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.919194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.919223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.919233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.922743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.922782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.922793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.926756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.926783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.926794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.930785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.930814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.930824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.934177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.934224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.934235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.937913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.937959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.937970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.941155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.941185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.432 [2024-11-21 02:40:11.941195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.432 [2024-11-21 02:40:11.945238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.432 [2024-11-21 02:40:11.945268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.945279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.949142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.949172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.949183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.952604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.952634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.952644] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.956544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.956572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.956583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.960217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.960246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.960257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.964191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.964221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.964232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.968345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.968374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.968384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.971858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.971886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.971897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.975406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.975437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.975447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.979443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.979473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.979483] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.982844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.982889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.982900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.986227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.986284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.986314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.990297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.990334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.990347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.993949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.993980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.993991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:11.997416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:11.997446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:11.997457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.001388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.001418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.001429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.005034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.005065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:31.433 [2024-11-21 02:40:12.005076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.008029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.008059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.008070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.012009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.012040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.012051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.015579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.015609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.015621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.019078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.019125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.019151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.023090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.023134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.023162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.027280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.027310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.027320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.030881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.030926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.030938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.034928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.034974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.034985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.038402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.038431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.038441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.042304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.042349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.433 [2024-11-21 02:40:12.042376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:31.433 [2024-11-21 02:40:12.046345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.433 [2024-11-21 02:40:12.046374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.434 [2024-11-21 02:40:12.046384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:31.434 [2024-11-21 02:40:12.050560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.434 [2024-11-21 02:40:12.050589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.434 [2024-11-21 02:40:12.050599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:31.434 [2024-11-21 02:40:12.054290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.434 [2024-11-21 02:40:12.054338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.434 [2024-11-21 02:40:12.054351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:31.434 [2024-11-21 02:40:12.058337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:31.434 [2024-11-21 02:40:12.058383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.434 [2024-11-21 02:40:12.058410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message pattern repeats for dozens of READ commands on qid:1 between 02:40:12.061 and 02:40:12.566 (log timestamps 00:23:31.434 through 00:23:32.015): nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done reports "data digest error on tqpair=(0x23dc7e0)", nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected READ (len:32, varying cid and lba), and nvme_qpair.c: 474:spdk_nvme_print_completion prints its completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22); only cid, lba, and sqhd differ between entries ...]
00:23:32.015 [2024-11-21 02:40:12.570351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.570414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.570425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.574003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.574033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.574071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.577629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.577658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.577668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.580243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.580288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.580299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.583901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.583949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.583960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.587586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.587631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.587642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.591470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.591516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.591528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.595047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.595076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.595086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.598673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.598702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.598713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.602465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.602494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.602505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.606291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.606338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.606350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.610155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.610208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.610220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.614399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.614432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.614459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.618533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 [2024-11-21 02:40:12.618580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.618591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.015 [2024-11-21 02:40:12.622499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.015 
[2024-11-21 02:40:12.622530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.015 [2024-11-21 02:40:12.622542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.307 [2024-11-21 02:40:12.626752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.307 [2024-11-21 02:40:12.626798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.307 [2024-11-21 02:40:12.626812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.307 [2024-11-21 02:40:12.631078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.307 [2024-11-21 02:40:12.631143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.307 [2024-11-21 02:40:12.631155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.307 [2024-11-21 02:40:12.635581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.307 [2024-11-21 02:40:12.635630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.307 [2024-11-21 02:40:12.635643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.639468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.639520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.639531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.643462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.643491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.643501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.647577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.647606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.647618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.651110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.651140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.651150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.653964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.653993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.654003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.657837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.657865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.657876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.661521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.661552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.661563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.665549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.665596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.665608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.669457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.669490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.669503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.673296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.673344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.673356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.677313] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.677359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.677370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.681269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.681300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.681310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.684823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.684869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.684880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.688643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.688673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.688683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.692097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.692127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.692137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.695968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.695998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.696009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.699784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.699813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.699823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.703615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.703646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.703657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.707185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.707215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.308 [2024-11-21 02:40:12.707226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.308 [2024-11-21 02:40:12.710825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.308 [2024-11-21 02:40:12.710863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.710874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.714657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.714686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.714696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.718177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.718209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.718221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.721951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.721980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.721989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.726552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.726582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.726592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.730740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.730793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.730805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.733978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.734024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.734035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.737121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.737167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.737178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.740882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.740927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.740938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.744455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.744484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.744495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.748430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.748458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.748468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.751789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.751816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.751827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.755926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.755954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.755964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.759673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.759718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.759744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.763207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.763235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.763245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.766987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.767016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.767026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.770833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.770860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.770870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.774121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.774166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.774178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.778399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.778461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:32.309 [2024-11-21 02:40:12.778488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.782555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.782584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.782594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.785899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.309 [2024-11-21 02:40:12.785929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.309 [2024-11-21 02:40:12.785939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.309 [2024-11-21 02:40:12.789418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.789448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.789459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.793295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.793325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.793336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.796385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.796415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.796425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.800274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.800305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.800316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.803619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.803649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.803659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.807328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.807372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.807384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.810503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.810548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.810560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.813985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.814029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.814064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.818026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.818093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.818119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.822514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.822559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.822570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.826448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.826495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.826520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.829994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.830065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.830093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.833271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.833300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.833311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.836533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.836564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.836574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.840370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.840399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.840409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.844278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.844308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.844319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.847882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.847912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.847923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.850914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.850944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.850954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.854791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 
[2024-11-21 02:40:12.854820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.854831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.858851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.858881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.858891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.862093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.310 [2024-11-21 02:40:12.862138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.310 [2024-11-21 02:40:12.862149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.310 [2024-11-21 02:40:12.865708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.865749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.865762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.869033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.869062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.869072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.872641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.872670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.872680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.875965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.875994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.876004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.879526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.879556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.879567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.883216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.883261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.883272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.886905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.886950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.886962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.891111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.891156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.891167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.895287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.895332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.895343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.899613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.899657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.899668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.903689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.903733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.903744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.907558] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.907586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.907597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.911644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.911675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.911686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.915573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.915604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.915615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.919208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.919238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.919249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.923032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.923077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.923088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.925688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.925717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.925727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.929435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.929465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.929475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:32.311 [2024-11-21 02:40:12.933270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.933299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.933310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.937253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.937281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.937291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.941329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.941360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.941370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.311 [2024-11-21 02:40:12.945899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.311 [2024-11-21 02:40:12.945928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.311 [2024-11-21 02:40:12.945939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.573 [2024-11-21 02:40:12.950713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.573 [2024-11-21 02:40:12.950769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.573 [2024-11-21 02:40:12.950783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.573 [2024-11-21 02:40:12.954683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.573 [2024-11-21 02:40:12.954712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.573 [2024-11-21 02:40:12.954723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.573 [2024-11-21 02:40:12.958567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.573 [2024-11-21 02:40:12.958598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.573 [2024-11-21 02:40:12.958609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.573 [2024-11-21 02:40:12.962626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.573 [2024-11-21 02:40:12.962656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.573 [2024-11-21 02:40:12.962667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.573 [2024-11-21 02:40:12.966490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.573 [2024-11-21 02:40:12.966520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.573 [2024-11-21 02:40:12.966531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.573 [2024-11-21 02:40:12.970308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.573 [2024-11-21 02:40:12.970354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.573 [2024-11-21 02:40:12.970380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.573 [2024-11-21 02:40:12.973068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.573 [2024-11-21 02:40:12.973098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:12.973108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:12.977296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:12.977325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:12.977335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:12.980955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:12.980984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:12.980995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:12.985440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:12.985469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:12.985480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:12.988499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:12.988544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:12.988556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:12.992198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:12.992228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:12.992239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:12.996295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:12.996326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:12.996337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.000032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.000077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.000089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.003795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.003840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.003852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.006957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.006986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.006997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.009904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.009933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 
[2024-11-21 02:40:13.009959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.013681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.013710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.013720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.017183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.017213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.017224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.020611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.020640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.020650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.023983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.024028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.024038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.027879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.027923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.027934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.031331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.031360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.031371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.035115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.035144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.035156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.038624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.038653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.038664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.042748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.042785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.042796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.046191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.046223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.574 [2024-11-21 02:40:13.046235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.574 [2024-11-21 02:40:13.050056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.574 [2024-11-21 02:40:13.050115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.050126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.053721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.053758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.053769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.057451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.057481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.057491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.060852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.060881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.060891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.064359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.064388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.064399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.067830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.067875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.067886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.071565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.071595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.071605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.074969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.074999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.075010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.078779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.078806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.078817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.082154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.082185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.082196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.085491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.085521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.085531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.089227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.089257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.089268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.092967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.092996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.093006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.096453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.096482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.096493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.099765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.099809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.099820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.103760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.103803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.103814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.107340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.107368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.107379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.111015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 
[2024-11-21 02:40:13.111044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.111054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.114568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.114597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.114608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.118213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.118247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.118260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.122336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.122382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.122408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.575 [2024-11-21 02:40:13.126133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.575 [2024-11-21 02:40:13.126179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.575 [2024-11-21 02:40:13.126191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.129878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.129906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.129916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.133752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.133780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.133790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.137206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.137234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.137244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.140838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.140868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.140878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.144577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.144623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.144635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.147492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.147537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.147548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.150965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.151011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.151022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.155159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.155188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.155199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.159533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.159566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.159578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.163253] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.163298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.163310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.167482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.167531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.167543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.171566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.171595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.171605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.175186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.175215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.175226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.179247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.179277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.179288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.183187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.183216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.183226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.187062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.187092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.187103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:32.576 [2024-11-21 02:40:13.191254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.191285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.191295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.194940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.194969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.194979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.198716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.198757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.198768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.202437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.202465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.202475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.576 [2024-11-21 02:40:13.204991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23dc7e0) 00:23:32.576 [2024-11-21 02:40:13.205036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.576 [2024-11-21 02:40:13.205048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.576 00:23:32.576 Latency(us) 00:23:32.576 [2024-11-21T02:40:13.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.576 [2024-11-21T02:40:13.223Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:32.576 nvme0n1 : 2.00 8303.29 1037.91 0.00 0.00 1923.67 573.44 4915.20 00:23:32.576 [2024-11-21T02:40:13.224Z] =================================================================================================================== 00:23:32.577 [2024-11-21T02:40:13.224Z] Total : 8303.29 1037.91 0.00 0.00 1923.67 573.44 4915.20 00:23:32.577 0 00:23:32.835 02:40:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:32.835 02:40:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:32.835 02:40:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:32.835 02:40:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:23:32.835 | .driver_specific 00:23:32.835 | .nvme_error 00:23:32.835 | .status_code 00:23:32.835 | .command_transient_transport_error' 00:23:33.095 02:40:13 -- host/digest.sh@71 -- # (( 536 > 0 )) 00:23:33.095 02:40:13 -- host/digest.sh@73 -- # killprocess 87279 00:23:33.095 02:40:13 -- common/autotest_common.sh@936 -- # '[' -z 87279 ']' 00:23:33.095 02:40:13 -- common/autotest_common.sh@940 -- # kill -0 87279 00:23:33.095 02:40:13 -- common/autotest_common.sh@941 -- # uname 00:23:33.095 02:40:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:33.095 02:40:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87279 00:23:33.095 02:40:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:33.095 02:40:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:33.095 killing process with pid 87279 00:23:33.095 02:40:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87279' 00:23:33.095 Received shutdown signal, test time was about 2.000000 seconds 00:23:33.095 00:23:33.095 Latency(us) 00:23:33.095 [2024-11-21T02:40:13.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.095 [2024-11-21T02:40:13.742Z] =================================================================================================================== 00:23:33.095 [2024-11-21T02:40:13.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.095 02:40:13 -- common/autotest_common.sh@955 -- # kill 87279 00:23:33.095 02:40:13 -- common/autotest_common.sh@960 -- # wait 87279 00:23:33.354 02:40:13 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:33.354 02:40:13 -- host/digest.sh@54 -- # local rw bs qd 00:23:33.354 02:40:13 -- host/digest.sh@56 -- # rw=randwrite 00:23:33.354 02:40:13 -- host/digest.sh@56 -- # bs=4096 00:23:33.354 02:40:13 -- host/digest.sh@56 -- # qd=128 00:23:33.354 02:40:13 -- host/digest.sh@58 -- # bperfpid=87369 00:23:33.354 02:40:13 -- host/digest.sh@60 -- # waitforlisten 87369 /var/tmp/bperf.sock 00:23:33.354 02:40:13 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:33.354 02:40:13 -- common/autotest_common.sh@829 -- # '[' -z 87369 ']' 00:23:33.354 02:40:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:33.354 02:40:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:33.354 02:40:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:33.354 02:40:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.354 02:40:13 -- common/autotest_common.sh@10 -- # set +x 00:23:33.354 [2024-11-21 02:40:13.821209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:33.354 [2024-11-21 02:40:13.821327] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87369 ] 00:23:33.354 [2024-11-21 02:40:13.952643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.614 [2024-11-21 02:40:14.035320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.182 02:40:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.182 02:40:14 -- common/autotest_common.sh@862 -- # return 0 00:23:34.182 02:40:14 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:34.182 02:40:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:34.441 02:40:14 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:34.441 02:40:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.441 02:40:14 -- common/autotest_common.sh@10 -- # set +x 00:23:34.441 02:40:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.441 02:40:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:34.441 02:40:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:34.700 nvme0n1 00:23:34.959 02:40:15 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:34.959 02:40:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.959 02:40:15 -- common/autotest_common.sh@10 -- # set +x 00:23:34.959 02:40:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.959 02:40:15 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:34.959 02:40:15 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:34.960 Running I/O for 2 seconds... 
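
For context, the WRITE data-digest-error completions that follow are produced by the randwrite error-injection pass whose setup is traced in the host/digest.sh lines above. The sketch below is a minimal reconstruction of that sequence, assembled only from the rpc.py, bdevperf and bdevperf.py invocations visible in this log; the rootdir paths, the shell variable names, and the assumption that accel_error_inject_error is sent to the target application's default RPC socket (rather than bperf.sock) are illustrative additions, not part of the log.

```bash
# Sketch of the randwrite digest-error pass (reconstructed from the traced
# host/digest.sh calls in this log; variable names and paths are assumed).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# bdevperf was started earlier as:
#   build/examples/bdevperf -m 2 -r $bperf_sock -w randwrite -o 4096 -t 2 -q 128 -z

# Record NVMe error statistics per bdev and retry failed I/O forever, so the
# injected digest failures only show up in the transient-transport-error counter.
$rpc_py -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Corrupt every 256th crc32c computed by the accel layer (issued via rpc_cmd in
# the log; assumed here to target the nvmf target application's RPC socket).
$rpc_py accel_error_inject_error -o crc32c -t corrupt -i 256

# Attach the TCP controller with data digest enabled so the corrupted CRC is
# actually validated on the receive path.
$rpc_py -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run I/O for the configured 2 seconds, then read the accumulated error count.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests
errs=$($rpc_py -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))   # same pass/fail check seen as "(( 536 > 0 ))" in the earlier randread pass
```
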
00:23:34.960 [2024-11-21 02:40:15.464310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eea00 00:23:34.960 [2024-11-21 02:40:15.464525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.464552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.473201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ea248 00:23:34.960 [2024-11-21 02:40:15.473827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.473895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.482091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df118 00:23:34.960 [2024-11-21 02:40:15.482485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.482522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.490920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e88f8 00:23:34.960 [2024-11-21 02:40:15.491250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.491282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.499697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ebfd0 00:23:34.960 [2024-11-21 02:40:15.500025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.500068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.508561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f35f0 00:23:34.960 [2024-11-21 02:40:15.508820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.508884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.517345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e4578 00:23:34.960 [2024-11-21 02:40:15.517569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.517589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:004d p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.526373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ef6a8 00:23:34.960 [2024-11-21 02:40:15.527170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.527219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.534980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eaef0 00:23:34.960 [2024-11-21 02:40:15.536022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.536053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.543750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ef6a8 00:23:34.960 [2024-11-21 02:40:15.544652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.544700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.552570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e8088 00:23:34.960 [2024-11-21 02:40:15.553504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.553536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.561696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eee38 00:23:34.960 [2024-11-21 02:40:15.562827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.562875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.571483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f0350 00:23:34.960 [2024-11-21 02:40:15.572256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.572305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.579714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190edd58 00:23:34.960 [2024-11-21 02:40:15.580371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.580420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.588686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e6fa8 00:23:34.960 [2024-11-21 02:40:15.589056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.589092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:34.960 [2024-11-21 02:40:15.597457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e27f0 00:23:34.960 [2024-11-21 02:40:15.597801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:34.960 [2024-11-21 02:40:15.597833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:35.220 [2024-11-21 02:40:15.606269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e9e10 00:23:35.220 [2024-11-21 02:40:15.607231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.220 [2024-11-21 02:40:15.607295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:35.220 [2024-11-21 02:40:15.614933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e4140 00:23:35.220 [2024-11-21 02:40:15.615883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.220 [2024-11-21 02:40:15.615912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:35.220 [2024-11-21 02:40:15.624097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f31b8 00:23:35.220 [2024-11-21 02:40:15.624338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.220 [2024-11-21 02:40:15.624374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:35.220 [2024-11-21 02:40:15.633023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e1b48 00:23:35.220 [2024-11-21 02:40:15.633447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.220 [2024-11-21 02:40:15.633482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:35.220 [2024-11-21 02:40:15.641801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f96f8 00:23:35.221 [2024-11-21 02:40:15.642220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.642255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.650584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f92c0 00:23:35.221 [2024-11-21 02:40:15.650975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.651010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.659365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ef270 00:23:35.221 [2024-11-21 02:40:15.659708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.659753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.668130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ec840 00:23:35.221 [2024-11-21 02:40:15.668495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.668530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.676901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e6fa8 00:23:35.221 [2024-11-21 02:40:15.677233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.677275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.688051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ed0b0 00:23:35.221 [2024-11-21 02:40:15.689032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.689063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.695223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e4140 00:23:35.221 [2024-11-21 02:40:15.695704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.695747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.703749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eea00 00:23:35.221 [2024-11-21 02:40:15.704668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.704699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.712325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eea00 00:23:35.221 [2024-11-21 02:40:15.713352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.713383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.723179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df550 00:23:35.221 [2024-11-21 02:40:15.724050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.724080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.729794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e84c0 00:23:35.221 [2024-11-21 02:40:15.729929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.729948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.740622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e95a0 00:23:35.221 [2024-11-21 02:40:15.741160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.741196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.749575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f96f8 00:23:35.221 [2024-11-21 02:40:15.750108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.750143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.758638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5ec8 00:23:35.221 [2024-11-21 02:40:15.759341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.759388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.767490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e9168 00:23:35.221 [2024-11-21 02:40:15.768165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.768213] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.776382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ed0b0 00:23:35.221 [2024-11-21 02:40:15.777034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.777081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.785149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f0350 00:23:35.221 [2024-11-21 02:40:15.785748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.785790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.793905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e6fa8 00:23:35.221 [2024-11-21 02:40:15.794502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.794550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.802652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ecc78 00:23:35.221 [2024-11-21 02:40:15.803309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.803355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.810869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fb480 00:23:35.221 [2024-11-21 02:40:15.811271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.811305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.820521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eee38 00:23:35.221 [2024-11-21 02:40:15.821072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.821104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.829437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e0630 00:23:35.221 [2024-11-21 02:40:15.830206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.830256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.838238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de038 00:23:35.221 [2024-11-21 02:40:15.838956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.839003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.847038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e1710 00:23:35.221 [2024-11-21 02:40:15.847711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.847769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.855838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eea00 00:23:35.221 [2024-11-21 02:40:15.856460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.221 [2024-11-21 02:40:15.856523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:35.221 [2024-11-21 02:40:15.863943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f4f40 00:23:35.481 [2024-11-21 02:40:15.865134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.481 [2024-11-21 02:40:15.865197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:35.481 [2024-11-21 02:40:15.873526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ed920 00:23:35.481 [2024-11-21 02:40:15.874874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.481 [2024-11-21 02:40:15.874922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:35.481 [2024-11-21 02:40:15.883255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e01f8 00:23:35.481 [2024-11-21 02:40:15.884091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.481 [2024-11-21 02:40:15.884121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:35.481 [2024-11-21 02:40:15.889912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e6b70 00:23:35.481 [2024-11-21 02:40:15.890006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 
02:40:15.890026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.899516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ea680 00:23:35.482 [2024-11-21 02:40:15.899939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.899975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.909048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fe720 00:23:35.482 [2024-11-21 02:40:15.910660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.910711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.918371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1430 00:23:35.482 [2024-11-21 02:40:15.918913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.918945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.927562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eea00 00:23:35.482 [2024-11-21 02:40:15.928572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.928601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.936190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df118 00:23:35.482 [2024-11-21 02:40:15.937669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.937720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.945238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df118 00:23:35.482 [2024-11-21 02:40:15.946261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.946291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.953805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e73e0 00:23:35.482 [2024-11-21 02:40:15.954835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:35.482 [2024-11-21 02:40:15.954864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.962649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f4f40 00:23:35.482 [2024-11-21 02:40:15.963633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.963664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.971657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ef270 00:23:35.482 [2024-11-21 02:40:15.972110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.972143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.980983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fd640 00:23:35.482 [2024-11-21 02:40:15.981514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.981546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.989701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e3498 00:23:35.482 [2024-11-21 02:40:15.990895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.990926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:15.998850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f8e88 00:23:35.482 [2024-11-21 02:40:15.999716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:15.999769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.007941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de038 00:23:35.482 [2024-11-21 02:40:16.009209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.009240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.017038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f0ff8 00:23:35.482 [2024-11-21 02:40:16.017470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23458 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.017502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.025956] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df988 00:23:35.482 [2024-11-21 02:40:16.026601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.026632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.033520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fc998 00:23:35.482 [2024-11-21 02:40:16.033614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.033650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.042848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f57b0 00:23:35.482 [2024-11-21 02:40:16.043022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.043049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.052378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eea00 00:23:35.482 [2024-11-21 02:40:16.053755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.053819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.061533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190efae0 00:23:35.482 [2024-11-21 02:40:16.061892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.061918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.070538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6458 00:23:35.482 [2024-11-21 02:40:16.071029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.071071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.079292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de470 00:23:35.482 [2024-11-21 02:40:16.079795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:769 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.079830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.088065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1430 00:23:35.482 [2024-11-21 02:40:16.088494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.088521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.096850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f0788 00:23:35.482 [2024-11-21 02:40:16.097233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.097273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.105909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f3e60 00:23:35.482 [2024-11-21 02:40:16.106522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.106553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.114760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ed4e8 00:23:35.482 [2024-11-21 02:40:16.115277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.482 [2024-11-21 02:40:16.115307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:35.482 [2024-11-21 02:40:16.123642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de470 00:23:35.483 [2024-11-21 02:40:16.124618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.483 [2024-11-21 02:40:16.124663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:35.742 [2024-11-21 02:40:16.132464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190edd58 00:23:35.743 [2024-11-21 02:40:16.133622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.133666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.143722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df118 00:23:35.743 [2024-11-21 02:40:16.144750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:6884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.144801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.150413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de8a8 00:23:35.743 [2024-11-21 02:40:16.150697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.150721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.160500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ee190 00:23:35.743 [2024-11-21 02:40:16.161275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.161309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.169220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1ca0 00:23:35.743 [2024-11-21 02:40:16.170448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.170508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.179087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de8a8 00:23:35.743 [2024-11-21 02:40:16.179739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.179775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.186883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f96f8 00:23:35.743 [2024-11-21 02:40:16.187715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.187769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.195236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f9b30 00:23:35.743 [2024-11-21 02:40:16.196178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.196222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.206165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e7c50 00:23:35.743 [2024-11-21 02:40:16.206817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:73 nsid:1 lba:24590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.206854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.213979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5ec8 00:23:35.743 [2024-11-21 02:40:16.215311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.215356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.222651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f2510 00:23:35.743 [2024-11-21 02:40:16.223855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.223900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.232789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ddc00 00:23:35.743 [2024-11-21 02:40:16.234154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.234185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.240820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fcdd0 00:23:35.743 [2024-11-21 02:40:16.241941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.241984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.249556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f3a28 00:23:35.743 [2024-11-21 02:40:16.250960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.251004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.258022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f0ff8 00:23:35.743 [2024-11-21 02:40:16.258947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.258991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.267220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ee5c8 00:23:35.743 [2024-11-21 02:40:16.267459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.267499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.276130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e6fa8 00:23:35.743 [2024-11-21 02:40:16.276784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.276813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.284902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e6b70 00:23:35.743 [2024-11-21 02:40:16.285296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.285319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.293690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fcdd0 00:23:35.743 [2024-11-21 02:40:16.294071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.294110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.302469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ed0b0 00:23:35.743 [2024-11-21 02:40:16.302795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.302818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.311227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6cc8 00:23:35.743 [2024-11-21 02:40:16.311519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.311543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.320022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1868 00:23:35.743 [2024-11-21 02:40:16.320289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.320313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.328796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e1b48 00:23:35.743 [2024-11-21 02:40:16.329045] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.329064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.337566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1868 00:23:35.743 [2024-11-21 02:40:16.337812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.337830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.348749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de8a8 00:23:35.743 [2024-11-21 02:40:16.349673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.349718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.355312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190dfdc0 00:23:35.743 [2024-11-21 02:40:16.355504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.743 [2024-11-21 02:40:16.355524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:35.743 [2024-11-21 02:40:16.366201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e38d0 00:23:35.744 [2024-11-21 02:40:16.366808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.744 [2024-11-21 02:40:16.366838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:35.744 [2024-11-21 02:40:16.373840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fac10 00:23:35.744 [2024-11-21 02:40:16.374707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.744 [2024-11-21 02:40:16.374764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:35.744 [2024-11-21 02:40:16.382535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190edd58 00:23:35.744 [2024-11-21 02:40:16.384055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.744 [2024-11-21 02:40:16.384100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.394020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e9168 00:23:36.003 [2024-11-21 
02:40:16.395084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.395111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.400678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190efae0 00:23:36.003 [2024-11-21 02:40:16.400998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.401022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.410539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f4f40 00:23:36.003 [2024-11-21 02:40:16.411220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.411251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.417952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fcdd0 00:23:36.003 [2024-11-21 02:40:16.418931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.418990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.428490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1868 00:23:36.003 [2024-11-21 02:40:16.428839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.428863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.437380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5658 00:23:36.003 [2024-11-21 02:40:16.437882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.437907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.446171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6890 00:23:36.003 [2024-11-21 02:40:16.446636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.446664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.454906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e0630 00:23:36.003 
[2024-11-21 02:40:16.455350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.455376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.463673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e38d0 00:23:36.003 [2024-11-21 02:40:16.464069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.464125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.472434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f57b0 00:23:36.003 [2024-11-21 02:40:16.472804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.472823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.481186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e95a0 00:23:36.003 [2024-11-21 02:40:16.481561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.003 [2024-11-21 02:40:16.481586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.003 [2024-11-21 02:40:16.489926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f57b0 00:23:36.004 [2024-11-21 02:40:16.490334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.490358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.498963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e99d8 00:23:36.004 [2024-11-21 02:40:16.499354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.499380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.507781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ef6a8 00:23:36.004 [2024-11-21 02:40:16.508751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.508818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.516736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with 
pdu=0x2000190ed920 00:23:36.004 [2024-11-21 02:40:16.517160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.517200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.526376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e0a68 00:23:36.004 [2024-11-21 02:40:16.527312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.527339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.535383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e6300 00:23:36.004 [2024-11-21 02:40:16.536796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.536839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.544128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e4140 00:23:36.004 [2024-11-21 02:40:16.545385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.545413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.553869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fe720 00:23:36.004 [2024-11-21 02:40:16.554771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.554824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.561673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e0ea0 00:23:36.004 [2024-11-21 02:40:16.563195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.563223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.570337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e7818 00:23:36.004 [2024-11-21 02:40:16.571663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.571691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.579277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb768f0) with pdu=0x2000190e3498 00:23:36.004 [2024-11-21 02:40:16.580571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.580598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.587918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de470 00:23:36.004 [2024-11-21 02:40:16.588740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.588793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.597040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fef90 00:23:36.004 [2024-11-21 02:40:16.597547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.597572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.605664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e3d08 00:23:36.004 [2024-11-21 02:40:16.606623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.606668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.616567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ee190 00:23:36.004 [2024-11-21 02:40:16.617552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.617595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.622938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1430 00:23:36.004 [2024-11-21 02:40:16.623068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.623087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.631812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ed920 00:23:36.004 [2024-11-21 02:40:16.632093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.632128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.004 [2024-11-21 02:40:16.640550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb768f0) with pdu=0x2000190f35f0 00:23:36.004 [2024-11-21 02:40:16.640802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.004 [2024-11-21 02:40:16.640822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.649583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e95a0 00:23:36.264 [2024-11-21 02:40:16.649908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.649927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.657698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5220 00:23:36.264 [2024-11-21 02:40:16.657788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.657808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.668843] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f3a28 00:23:36.264 [2024-11-21 02:40:16.669427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.669456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.676656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e3d08 00:23:36.264 [2024-11-21 02:40:16.677807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.677850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.685807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f0788 00:23:36.264 [2024-11-21 02:40:16.686990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.687018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.694556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e4de8 00:23:36.264 [2024-11-21 02:40:16.695551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.695578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.703360] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ff3c8 00:23:36.264 [2024-11-21 02:40:16.704365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.704393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.712463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fb048 00:23:36.264 [2024-11-21 02:40:16.712790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.712810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.721390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e3498 00:23:36.264 [2024-11-21 02:40:16.722170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.722230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.729703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e01f8 00:23:36.264 [2024-11-21 02:40:16.730242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.730272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.738317] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f7da8 00:23:36.264 [2024-11-21 02:40:16.739364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.739393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.747296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f3e60 00:23:36.264 [2024-11-21 02:40:16.748109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.748139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.756493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190dece0 00:23:36.264 [2024-11-21 02:40:16.756685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.756704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.765794] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6cc8 00:23:36.264 [2024-11-21 02:40:16.766832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.766859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.774897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df988 00:23:36.264 [2024-11-21 02:40:16.776083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.776111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.783630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ef6a8 00:23:36.264 [2024-11-21 02:40:16.784910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.784937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.264 [2024-11-21 02:40:16.792813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f4f40 00:23:36.264 [2024-11-21 02:40:16.793218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.264 [2024-11-21 02:40:16.793242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.801716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f9b30 00:23:36.265 [2024-11-21 02:40:16.802603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.802647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.809789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ed920 00:23:36.265 [2024-11-21 02:40:16.810816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.810859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.818621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5220 00:23:36.265 [2024-11-21 02:40:16.819227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.819258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.265 
[2024-11-21 02:40:16.829142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ef6a8 00:23:36.265 [2024-11-21 02:40:16.830291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.830318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.836532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f0ff8 00:23:36.265 [2024-11-21 02:40:16.837485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.837528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.844848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5ec8 00:23:36.265 [2024-11-21 02:40:16.845016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.845035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.853709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df988 00:23:36.265 [2024-11-21 02:40:16.854310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.854341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.862475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e73e0 00:23:36.265 [2024-11-21 02:40:16.862798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.862823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.871265] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6cc8 00:23:36.265 [2024-11-21 02:40:16.871531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.871555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.880007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df550 00:23:36.265 [2024-11-21 02:40:16.880265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.880288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:23:36.265 [2024-11-21 02:40:16.888767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e7818 00:23:36.265 [2024-11-21 02:40:16.889018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.889043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.897814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1430 00:23:36.265 [2024-11-21 02:40:16.898444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.898475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.265 [2024-11-21 02:40:16.906670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df118 00:23:36.265 [2024-11-21 02:40:16.907333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.265 [2024-11-21 02:40:16.907363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.917074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fc998 00:23:36.525 [2024-11-21 02:40:16.918464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.918491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.925373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fbcf0 00:23:36.525 [2024-11-21 02:40:16.926633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.926661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.934220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ea680 00:23:36.525 [2024-11-21 02:40:16.935148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.935174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.942559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6458 00:23:36.525 [2024-11-21 02:40:16.943740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.943777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005e p:0 
m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.951758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f4b08 00:23:36.525 [2024-11-21 02:40:16.952150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.952205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.960648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f92c0 00:23:36.525 [2024-11-21 02:40:16.961210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.961240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.969410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6890 00:23:36.525 [2024-11-21 02:40:16.969963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.969994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.978393] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e3498 00:23:36.525 [2024-11-21 02:40:16.978942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.978964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.987316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fc998 00:23:36.525 [2024-11-21 02:40:16.987813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.987837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:16.996264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de8a8 00:23:36.525 [2024-11-21 02:40:16.996709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:16.996758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.525 [2024-11-21 02:40:17.005172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ee5c8 00:23:36.525 [2024-11-21 02:40:17.005577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.525 [2024-11-21 02:40:17.005600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 
sqhd:0069 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.013965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e84c0 00:23:36.526 [2024-11-21 02:40:17.014529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.014559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.021815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fd208 00:23:36.526 [2024-11-21 02:40:17.021941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.021960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.031649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e1b48 00:23:36.526 [2024-11-21 02:40:17.032338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.032369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.040447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e4578 00:23:36.526 [2024-11-21 02:40:17.041499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.041527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.049362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e3d08 00:23:36.526 [2024-11-21 02:40:17.049640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.049674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.058264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f46d0 00:23:36.526 [2024-11-21 02:40:17.058689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.058729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.067341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e7818 00:23:36.526 [2024-11-21 02:40:17.067718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.067751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.076144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f9b30 00:23:36.526 [2024-11-21 02:40:17.076500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.076523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.084930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e3d08 00:23:36.526 [2024-11-21 02:40:17.085360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.085401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.093751] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e73e0 00:23:36.526 [2024-11-21 02:40:17.094225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.094250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.102558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f7970 00:23:36.526 [2024-11-21 02:40:17.103032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.103067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.111373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de8a8 00:23:36.526 [2024-11-21 02:40:17.111880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.111903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.120201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5658 00:23:36.526 [2024-11-21 02:40:17.120817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.120846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.129021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fac10 00:23:36.526 [2024-11-21 02:40:17.129634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.129663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.137831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f4f40 00:23:36.526 [2024-11-21 02:40:17.138250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.138275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.146706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f5be8 00:23:36.526 [2024-11-21 02:40:17.147161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.147202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.155553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6cc8 00:23:36.526 [2024-11-21 02:40:17.156091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.156113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:36.526 [2024-11-21 02:40:17.166078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f2d80 00:23:36.526 [2024-11-21 02:40:17.167309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.526 [2024-11-21 02:40:17.167354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.174227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ddc00 00:23:36.786 [2024-11-21 02:40:17.175708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.175779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.185056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eea00 00:23:36.786 [2024-11-21 02:40:17.185866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.185896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.192041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e9e10 00:23:36.786 [2024-11-21 02:40:17.192107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.192142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.203607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ecc78 00:23:36.786 [2024-11-21 02:40:17.204198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.204229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.212256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e9168 00:23:36.786 [2024-11-21 02:40:17.213539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.213582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.221688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ec408 00:23:36.786 [2024-11-21 02:40:17.222084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.222118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.231482] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5ec8 00:23:36.786 [2024-11-21 02:40:17.232429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.232474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.240183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de8a8 00:23:36.786 [2024-11-21 02:40:17.241675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.241722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.251640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e8088 00:23:36.786 [2024-11-21 02:40:17.252665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.252711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.258197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f4b08 00:23:36.786 [2024-11-21 02:40:17.258365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.258385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.267664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f8a50 00:23:36.786 [2024-11-21 02:40:17.269005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.269049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.276302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f7da8 00:23:36.786 [2024-11-21 02:40:17.277293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.277336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.287928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f2d80 00:23:36.786 [2024-11-21 02:40:17.288831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.288881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.295899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ea248 00:23:36.786 [2024-11-21 02:40:17.297381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.297426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.304597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f6458 00:23:36.786 [2024-11-21 02:40:17.305723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.305776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.313710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ec408 00:23:36.786 [2024-11-21 02:40:17.314227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.314257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.322580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ebfd0 00:23:36.786 [2024-11-21 02:40:17.323260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.323290] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.330106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190f1430 00:23:36.786 [2024-11-21 02:40:17.330241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.786 [2024-11-21 02:40:17.330260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:36.786 [2024-11-21 02:40:17.339253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e1b48 00:23:36.786 [2024-11-21 02:40:17.339799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.339832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.350471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fa7d8 00:23:36.787 [2024-11-21 02:40:17.351442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.351486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.357083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190df988 00:23:36.787 [2024-11-21 02:40:17.357320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.357340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.366905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e4140 00:23:36.787 [2024-11-21 02:40:17.367462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.367492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.375932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fd640 00:23:36.787 [2024-11-21 02:40:17.377014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.377057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.384322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190e5658 00:23:36.787 [2024-11-21 02:40:17.384882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.384913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.392975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190eb760 00:23:36.787 [2024-11-21 02:40:17.393858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.393885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.403792] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ebfd0 00:23:36.787 [2024-11-21 02:40:17.404519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.404547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.411659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190de8a8 00:23:36.787 [2024-11-21 02:40:17.413006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.413035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.420675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fbcf0 00:23:36.787 [2024-11-21 02:40:17.421050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:36.787 [2024-11-21 02:40:17.421075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:36.787 [2024-11-21 02:40:17.429797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fc998 00:23:37.045 [2024-11-21 02:40:17.430494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:14942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.045 [2024-11-21 02:40:17.430540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:37.045 [2024-11-21 02:40:17.438731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190fc128 00:23:37.045 [2024-11-21 02:40:17.439255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.045 [2024-11-21 02:40:17.439285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:37.045 [2024-11-21 02:40:17.447552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb768f0) with pdu=0x2000190ddc00 00:23:37.045 [2024-11-21 02:40:17.448076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.045 [2024-11-21 02:40:17.448103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:37.045 00:23:37.045 Latency(us) 00:23:37.045 [2024-11-21T02:40:17.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.045 [2024-11-21T02:40:17.692Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:37.045 nvme0n1 : 2.00 28452.11 111.14 0.00 0.00 4493.99 1876.71 12809.31 00:23:37.045 [2024-11-21T02:40:17.692Z] =================================================================================================================== 00:23:37.045 [2024-11-21T02:40:17.692Z] Total : 28452.11 111.14 0.00 0.00 4493.99 1876.71 12809.31 00:23:37.045 0 00:23:37.045 02:40:17 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:37.045 02:40:17 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:37.045 02:40:17 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:37.045 | .driver_specific 00:23:37.045 | .nvme_error 00:23:37.045 | .status_code 00:23:37.045 | .command_transient_transport_error' 00:23:37.045 02:40:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:37.303 02:40:17 -- host/digest.sh@71 -- # (( 223 > 0 )) 00:23:37.303 02:40:17 -- host/digest.sh@73 -- # killprocess 87369 00:23:37.304 02:40:17 -- common/autotest_common.sh@936 -- # '[' -z 87369 ']' 00:23:37.304 02:40:17 -- common/autotest_common.sh@940 -- # kill -0 87369 00:23:37.304 02:40:17 -- common/autotest_common.sh@941 -- # uname 00:23:37.304 02:40:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.304 02:40:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87369 00:23:37.304 02:40:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:37.304 02:40:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:37.304 killing process with pid 87369 00:23:37.304 02:40:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87369' 00:23:37.304 02:40:17 -- common/autotest_common.sh@955 -- # kill 87369 00:23:37.304 Received shutdown signal, test time was about 2.000000 seconds 00:23:37.304 00:23:37.304 Latency(us) 00:23:37.304 [2024-11-21T02:40:17.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.304 [2024-11-21T02:40:17.951Z] =================================================================================================================== 00:23:37.304 [2024-11-21T02:40:17.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.304 02:40:17 -- common/autotest_common.sh@960 -- # wait 87369 00:23:37.562 02:40:18 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:37.562 02:40:18 -- host/digest.sh@54 -- # local rw bs qd 00:23:37.562 02:40:18 -- host/digest.sh@56 -- # rw=randwrite 00:23:37.562 02:40:18 -- host/digest.sh@56 -- # bs=131072 00:23:37.562 02:40:18 -- host/digest.sh@56 -- # qd=16 00:23:37.562 02:40:18 -- host/digest.sh@58 -- # bperfpid=87454 00:23:37.562 02:40:18 -- host/digest.sh@60 -- # waitforlisten 87454 /var/tmp/bperf.sock 00:23:37.562 02:40:18 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:37.562 02:40:18 -- common/autotest_common.sh@829 -- # '[' -z 87454 ']' 00:23:37.562 02:40:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:37.562 02:40:18 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:37.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:37.562 02:40:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:37.562 02:40:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:37.562 02:40:18 -- common/autotest_common.sh@10 -- # set +x 00:23:37.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:37.562 Zero copy mechanism will not be used. 00:23:37.562 [2024-11-21 02:40:18.079667] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:37.563 [2024-11-21 02:40:18.079773] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87454 ] 00:23:37.821 [2024-11-21 02:40:18.215113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.821 [2024-11-21 02:40:18.302880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.759 02:40:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.759 02:40:19 -- common/autotest_common.sh@862 -- # return 0 00:23:38.759 02:40:19 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:38.759 02:40:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:38.759 02:40:19 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:38.759 02:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.759 02:40:19 -- common/autotest_common.sh@10 -- # set +x 00:23:38.759 02:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.759 02:40:19 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:38.759 02:40:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:39.019 nvme0n1 00:23:39.019 02:40:19 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:39.019 02:40:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.019 02:40:19 -- common/autotest_common.sh@10 -- # set +x 00:23:39.019 02:40:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.019 02:40:19 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:39.019 02:40:19 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:39.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:39.280 Zero copy mechanism will not be used. 00:23:39.280 Running I/O for 2 seconds... 
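Editor's note: the host/digest.sh xtrace above condenses to the flow sketched below for each error pass. This is only a restatement of commands already visible in this run's trace (paths, socket names and RPC arguments are copied from the log); the assumption that the plain rpc_cmd calls land on the target application's default RPC socket is the editor's, since only the bperf_rpc calls name /var/tmp/bperf.sock explicitly, and the harness helpers (waitforlisten, killprocess, the bperf.sock readiness loop) are omitted.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # RPC client used throughout the trace
bperf_sock=/var/tmp/bperf.sock                    # bdevperf's private RPC socket

# 1. Start bdevperf in wait-for-RPC mode (-z) with the workload under test:
#    randwrite, 128 KiB I/O, queue depth 16, 2 second run (the pass that starts here).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &

# 2. Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. With injection disabled, attach the controller over TCP with data digest enabled.
"$rpc" accel_error_inject_error -o crc32c -t disable       # target side, by assumption
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Corrupt 32 crc32c results so writes complete with data digest / transient
#    transport errors, then drive I/O for the configured 2 seconds.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32  # target side, by assumption
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

# 5. Pass criterion: the transient transport error counter is non-zero
#    (223 in the randwrite/4096/depth-128 pass that finished above).
"$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'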
00:23:39.280 [2024-11-21 02:40:19.727444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.727878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.727908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.731495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.731796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.731823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.735618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.735735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.735758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.739643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.739734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.739769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.743876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.744004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.744026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.747935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.748016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.748037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.751955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.752144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.752164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.755917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.756120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.756142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.760006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.760164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.760185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.280 [2024-11-21 02:40:19.764181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.280 [2024-11-21 02:40:19.764340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.280 [2024-11-21 02:40:19.764362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.768173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.768281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.768301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.772132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.772242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.772263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.776184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.776261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.776282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.780204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.780347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.780368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.784157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.784365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.784386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.788249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.788403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.788424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.792285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.792390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.792410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.796288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.796439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.796461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.800311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.800446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.800465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.804272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.804364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.804384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.808207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.808298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.808318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.812215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.812337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.812357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.816195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.816402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.816421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.820156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.820336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.820356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.824171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.824286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.824306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.828118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.828223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.828242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.832155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.832264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.832284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.836124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.836264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.836285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.840028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.840103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.840122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.843969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.844091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.844111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.847851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.848014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.848034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.851872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.852021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.852041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.855714] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.855844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.855863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.859641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.859773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.859793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.863560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.863677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.863697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.867505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.867613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.281 [2024-11-21 02:40:19.867632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.281 [2024-11-21 02:40:19.871483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.281 [2024-11-21 02:40:19.871558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.871579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.875400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.875539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.875559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.879348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.879536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.879555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.883371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.883552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.883571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.887286] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.887389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.887408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.891228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.891348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 
02:40:19.891368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.895195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.895305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.895324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.899173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.899260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.899280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.903127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.903219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.903238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.907074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.907203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.907222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.911103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.911270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.911290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.915165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.915311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.282 [2024-11-21 02:40:19.915331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.282 [2024-11-21 02:40:19.919102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.282 [2024-11-21 02:40:19.919283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.282 [2024-11-21 02:40:19.919303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.923211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.923362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.923384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.927298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.927394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.927414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.931432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.931525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.931545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.935390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.935499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.935519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.939385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.939514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.939534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.943396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.943583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.943602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.947470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.947652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.947672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.951451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.951554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.951573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.955413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.955530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.955549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.959394] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.959501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.959520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.963415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.963538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.963558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.967351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.967454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.967474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.971359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.971485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.971505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.975262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.975471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.975490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.543 [2024-11-21 02:40:19.979437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.543 [2024-11-21 02:40:19.979601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.543 [2024-11-21 02:40:19.979621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:19.983356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:19.983469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:19.983490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:19.987271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:19.987377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:19.987398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:19.991279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:19.991396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:19.991417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:19.995206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:19.995288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:19.995308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:19.999149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:19.999273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:19.999292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.003704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.003872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.003906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.008206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.008396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.008417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.012501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.012702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.012723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.016998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.017164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.017185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.022505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.022711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.022762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.027988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.028182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.028216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.032726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.032831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.032852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.036819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.036919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.036940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.041018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.041140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.041159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.045042] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.045211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.045231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.049145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.049327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.049346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.053362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.053476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.053497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.057445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.057561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.057581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.061588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.061703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.061735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.065681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.065802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.065824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.069651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.069785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.069805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.073721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.073869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.073889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.077654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.077796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.077815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.081726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.081911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.081932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.085713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.085915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.085935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.089756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 02:40:20.089882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.089903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.093732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.544 [2024-11-21 
02:40:20.093860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.544 [2024-11-21 02:40:20.093881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.544 [2024-11-21 02:40:20.097766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.097846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.097867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.101715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.101843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.101864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.105756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.105881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.105901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.109678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.109909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.109945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.113730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.113930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.113951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.117712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.117835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.117855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.121608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 
00:23:39.545 [2024-11-21 02:40:20.121771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.121791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.125606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.125767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.125786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.129647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.129779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.129800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.133704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.133823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.133843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.137760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.137903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.137924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.141692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.141899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.141920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.145767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.145930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.145950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.149794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with 
pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.149934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.149954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.153702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.153846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.153867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.157777] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.157883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.157903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.161721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.161823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.161844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.165769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.165888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.165909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.169793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.169933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.169955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.173800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.174009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.174029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.177920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.178136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.178158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.545 [2024-11-21 02:40:20.181973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.545 [2024-11-21 02:40:20.182192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.545 [2024-11-21 02:40:20.182213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.186061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.186283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.186307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.190176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.190260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.190282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.194204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.194317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.194339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.198142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.198263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.198284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.202099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.202243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.202265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.206123] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.206283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.206305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.210035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.210198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.210219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.213970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.214136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.214157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.217918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.218021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.218040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.221839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.221945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.221965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.225707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.225839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.225860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.229560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.229664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.229684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.233596] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.233723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.233744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.237469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.237587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.237606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.241530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.241649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.241669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.245505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.245612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.245631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.249668] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.249792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.249814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.253601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.253708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.253728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.257565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.257691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.257712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 
02:40:20.261545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.261665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.807 [2024-11-21 02:40:20.261685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.807 [2024-11-21 02:40:20.265508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.807 [2024-11-21 02:40:20.265590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.265610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.269514] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.269654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.269674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.273529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.273682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.273703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.277568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.277858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.277895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.281509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.281633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.281654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.285505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.285654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.285674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
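The entries above all follow the same pattern: the host-side TCP transport reports "Data digest error" from data_crc32_calc_done for a received data PDU, and the affected WRITE is then completed with the NVMe status COMMAND TRANSIENT TRANSPORT ERROR (00/22). For readers unfamiliar with the check being exercised here, the sketch below shows, in plain C, how a CRC32C (Castagnoli) data digest can be computed over a PDU payload and compared against the digest carried in the PDU. This is an illustrative, self-contained example only; the crc32c helper, the payload buffer, and the received_digest value are hypothetical and are not SPDK's implementation or API.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
 * Illustrative only; production code typically uses table-driven or
 * hardware-accelerated variants. */
static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & (-(int32_t)(crc & 1)));
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[512] = {0};               /* hypothetical data PDU payload */
    uint32_t received_digest = 0xdeadbeef;    /* digest field from the PDU (made-up value) */
    uint32_t computed = crc32c(0, payload, sizeof(payload));

    if (computed != received_digest)
        printf("data digest mismatch: computed=0x%08x received=0x%08x\n",
               computed, received_digest);
    return 0;
}

When such a mismatch is detected, the command cannot be trusted to have transferred intact data, which is why the log shows each affected command finishing with a transient transport error rather than success. The log continues below with further injected digest failures of the same form.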
00:23:39.808 [2024-11-21 02:40:20.289396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.289509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.289529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.293365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.293483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.293504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.297389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.297488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.297508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.301543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.301671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.301691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.305812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.306034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.306081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.309917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.310148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.310171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.314054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.314200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.314222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.318166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.318372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.318407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.322125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.322235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.322258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.326204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.326351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.326386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.330298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.330431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.330452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.334295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.334451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.334487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.338350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.338521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.338541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.342328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.342532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.342551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.346280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.346412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.346432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.350228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.350357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.350392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.354223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.354303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.354324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.358066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.358145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.358166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.361983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.362138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.362159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.365856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.366092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.366114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.369834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.370034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.808 [2024-11-21 02:40:20.370081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.808 [2024-11-21 02:40:20.373790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.808 [2024-11-21 02:40:20.373981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.374001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.377713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.377843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.377863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.381734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.381920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.381940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.385698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.385837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.385858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.389594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.389681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.389701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.393554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.393692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.393712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.397567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.397761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.397802] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.401633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.401855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.401877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.405513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.405650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.405670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.409420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.409526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.409546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.413434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.413545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.413566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.417405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.417517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.417538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.421307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.421396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.421416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.425276] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.425421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.425442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.429338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.429521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.429541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.433446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.433615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.433636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.437380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.437575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.437595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.441319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.441467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.441488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.445305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.445424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.445459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:39.809 [2024-11-21 02:40:20.449411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:39.809 [2024-11-21 02:40:20.449520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.809 [2024-11-21 02:40:20.449542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.070 [2024-11-21 02:40:20.453453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.070 [2024-11-21 02:40:20.453565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.070 [2024-11-21 
02:40:20.453585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.070 [2024-11-21 02:40:20.457531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.070 [2024-11-21 02:40:20.457689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.070 [2024-11-21 02:40:20.457711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.070 [2024-11-21 02:40:20.461479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.070 [2024-11-21 02:40:20.461625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.070 [2024-11-21 02:40:20.461645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.070 [2024-11-21 02:40:20.465405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.070 [2024-11-21 02:40:20.465509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.070 [2024-11-21 02:40:20.465529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.070 [2024-11-21 02:40:20.469431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.070 [2024-11-21 02:40:20.469552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.070 [2024-11-21 02:40:20.469572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.070 [2024-11-21 02:40:20.473406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.070 [2024-11-21 02:40:20.473515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.070 [2024-11-21 02:40:20.473536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.070 [2024-11-21 02:40:20.477342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.070 [2024-11-21 02:40:20.477466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.477486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.481219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.481307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.071 [2024-11-21 02:40:20.481328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.485182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.485303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.485322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.489172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.489327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.489347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.493191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.493326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.493345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.497226] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.497373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.497393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.501170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.501332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.501352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.505112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.505244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.505264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.509095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.509203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.509223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.513010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.513083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.513103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.516982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.517093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.517112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.520987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.521109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.521129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.524970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.525161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.525180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.529057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.529221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.529241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.533035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.533150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.533171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.536890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.537039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.537058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.540871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.540990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.541010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.544797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.544872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.544892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.548717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.548807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.548827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.552639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.552777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.552797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.556572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.556777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.556797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.560636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.560829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.560849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.564731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.564849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.564868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.568640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.568762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.568783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.572615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.572715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.572735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.576549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.576625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.071 [2024-11-21 02:40:20.576645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.071 [2024-11-21 02:40:20.580597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.071 [2024-11-21 02:40:20.580700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.580720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.584634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.584771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.584790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.588605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.588789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.588809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.592683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.592871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.592891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.596653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.596787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.596807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.600539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.600642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.600662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.604537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.604654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.604674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.608558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.608635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.608655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.612480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.612555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.612575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.616462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.616592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.616611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.620415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.620599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.620619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.624435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.624585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.624604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.628359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.628539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.628560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.632255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.632361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.632381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.636213] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.636361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.636381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.640215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.640291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.640310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.644172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.644252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.644273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.648230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 
02:40:20.648370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.648390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.652232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.652387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.652407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.656327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.656460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.656479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.660273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.660391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.660411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.664228] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.664336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.664355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.668263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.668380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.668400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.672227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.072 [2024-11-21 02:40:20.672332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.672351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.676109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 
00:23:40.072 [2024-11-21 02:40:20.676184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.072 [2024-11-21 02:40:20.676204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.072 [2024-11-21 02:40:20.680153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.680277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.680297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.073 [2024-11-21 02:40:20.684110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.684269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.684289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.073 [2024-11-21 02:40:20.688195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.688341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.688361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.073 [2024-11-21 02:40:20.692153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.692275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.692296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.073 [2024-11-21 02:40:20.696185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.696341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.696360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.073 [2024-11-21 02:40:20.700234] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.700335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.700355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.073 [2024-11-21 02:40:20.704183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with 
pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.704265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.704285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.073 [2024-11-21 02:40:20.708162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.708247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.708267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.073 [2024-11-21 02:40:20.712318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.073 [2024-11-21 02:40:20.712471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.073 [2024-11-21 02:40:20.712491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.716334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.716480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.716499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.720307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.720454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.720476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.724402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.724518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.724537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.728339] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.728444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.728463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.732379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.732479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.732499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.736368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.736442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.736462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.740258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.740330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.740349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.744255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.744379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.744398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.748196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.748382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.748401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.752296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.752478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.752497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.756198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.756301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.756320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.760203] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.760337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.760356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.764262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.764399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.764419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.768242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.768350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.768369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.772225] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.772315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.772334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.776181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.776305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.776325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.780174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.780346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.780366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.784207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.784357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.784375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.788073] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.788228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.788248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.791985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.792133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.792153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.795960] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.796101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.796122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.799995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.800075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.800095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.334 [2024-11-21 02:40:20.804070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.334 [2024-11-21 02:40:20.804177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.334 [2024-11-21 02:40:20.804197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.808260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.808387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.808407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.812322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.812419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.812439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.335 
[2024-11-21 02:40:20.816418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.816534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.816556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.820545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.820892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.820919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.824422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.824512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.824533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.828391] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.828541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.828562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.832338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.832425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.832445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.836344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.836464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.836484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.840362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.840504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.840525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.844303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.844456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.844476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.848341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.848535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.848556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.852337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.852566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.852592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.856264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.856373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.856393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.860197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.860315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.860335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.864132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.864215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.864236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.868119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.868210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.868231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.872124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.872265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.872285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.876048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.876211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.876231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.880221] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.880398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.880418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.884196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.884531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.884558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.888090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.888183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.888203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.892088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.892260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.892280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.896054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.896198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.335 [2024-11-21 02:40:20.896218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.335 [2024-11-21 02:40:20.900030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.335 [2024-11-21 02:40:20.900135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.900170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.904041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.904214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.904234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.908076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.908332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.908359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.912018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.912112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.912133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.916125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.916321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.916341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.920092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.920198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.920218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.924082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.924236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.924256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.928146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.928292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.928313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.932113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.932227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.932247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.936049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.936225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.936246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.939969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.940224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.940260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.943912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.944083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.944104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.948268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.948474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.948493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.952257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.952354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 
02:40:20.952375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.956267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.956408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.956428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.960251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.960357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.960377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.964138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.964277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.964297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.968240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.968396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.968417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.972149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.972362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.972383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.336 [2024-11-21 02:40:20.976243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.336 [2024-11-21 02:40:20.976444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.336 [2024-11-21 02:40:20.976465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:20.980413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:20.980588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.597 [2024-11-21 02:40:20.980608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:20.984472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:20.984570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:20.984605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:20.988615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:20.988808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:20.988829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:20.992604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:20.992713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:20.992735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:20.996474] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:20.996574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:20.996594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:21.000486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:21.000901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:21.000923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:21.004730] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:21.005009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:21.005059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:21.008675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:21.008895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:21.008916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:21.013000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:21.013166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:21.013186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:21.016959] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:21.017058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:21.017079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:21.021026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:21.021178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:21.021199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:21.025066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:21.025220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.597 [2024-11-21 02:40:21.025243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.597 [2024-11-21 02:40:21.029092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.597 [2024-11-21 02:40:21.029248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.029268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.033374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.033532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.033552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.037406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.037647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.037673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.041467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.041574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.041594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.045529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.045651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.045672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.049413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.049493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.049512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.053462] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.053594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.053615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.057399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.057511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.057530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.061385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.061487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.061506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.065432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.065599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.065620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.069481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.069702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.069723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.073540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.073755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.073789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.077562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.077713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.077733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.081564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.081644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.081664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.085535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.085704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.085725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.089492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.089616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.089637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.093377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.093481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.093500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.097416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.097569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.097589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.101373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.101542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.101562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.105358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.105544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.105564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.109359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.109477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.109496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.113313] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.113400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.113419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.117302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.117433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.117453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.121269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.121374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.121395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.125281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.125389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.125409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.129405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.129555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.598 [2024-11-21 02:40:21.129575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.598 [2024-11-21 02:40:21.133377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.598 [2024-11-21 02:40:21.133561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.133580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.137370] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.137551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.137571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.141192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.141357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.141377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.145080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.145192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.145212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.149035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 
02:40:21.149205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.149224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.152993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.153106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.153127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.156891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.156976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.156996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.160874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.161038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.161058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.164799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.165013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.165033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.168758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.168947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.168968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.172623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.172769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.172789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.176468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 
00:23:40.599 [2024-11-21 02:40:21.176573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.176593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.180416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.180539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.180558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.184414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.184497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.184517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.188305] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.188385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.188405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.192318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.192461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.192481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.196260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.196426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.196445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.200121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.200212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.200232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.204160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with 
pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.204293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.204312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.208088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.208161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.208181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.212052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.212193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.212213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.216074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.216154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.216174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.220023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.220097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.220118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.223968] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.224114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.224134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.227878] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.599 [2024-11-21 02:40:21.228082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.599 [2024-11-21 02:40:21.228101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.599 [2024-11-21 02:40:21.231733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.600 [2024-11-21 02:40:21.231840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.600 [2024-11-21 02:40:21.231861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.600 [2024-11-21 02:40:21.235647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.600 [2024-11-21 02:40:21.235869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.600 [2024-11-21 02:40:21.235890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.600 [2024-11-21 02:40:21.239795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.600 [2024-11-21 02:40:21.239931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.600 [2024-11-21 02:40:21.239952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.243865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.860 [2024-11-21 02:40:21.243969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.860 [2024-11-21 02:40:21.243989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.247946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.860 [2024-11-21 02:40:21.248108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.860 [2024-11-21 02:40:21.248128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.251836] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.860 [2024-11-21 02:40:21.251954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.860 [2024-11-21 02:40:21.251974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.255813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.860 [2024-11-21 02:40:21.255969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.860 [2024-11-21 02:40:21.255989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.259667] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.860 [2024-11-21 02:40:21.259899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.860 [2024-11-21 02:40:21.259933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.263664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.860 [2024-11-21 02:40:21.263858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.860 [2024-11-21 02:40:21.263878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.267586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.860 [2024-11-21 02:40:21.267708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.860 [2024-11-21 02:40:21.267727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.271518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.860 [2024-11-21 02:40:21.271595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.860 [2024-11-21 02:40:21.271615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.860 [2024-11-21 02:40:21.275481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.275611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.275631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.279438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.279545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.279565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.283363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.283472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.283493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.287372] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.287517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.287536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.291181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.291374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.291393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.295145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.295277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.295296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.299123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.299279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.299299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.303040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.303118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.303138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.306985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.307109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.307129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.310902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.311003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.311023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.861 
[2024-11-21 02:40:21.314782] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.314890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.314909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.318750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.318929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.318949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.322684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.322959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.322979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.326468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.326589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.326609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.330636] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.330771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.330792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.334677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.334779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.334801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.338876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.339015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.339037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.342884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.343228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.343252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.346965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.347058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.347095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.351061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.351269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.351288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.355106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.355362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.355387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.359067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.359188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.359207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.363206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.363334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.363354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.367164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.367239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.367259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.371199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.371348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.371368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.375116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.375223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.375242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.861 [2024-11-21 02:40:21.379000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.861 [2024-11-21 02:40:21.379080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.861 [2024-11-21 02:40:21.379100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.383005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.383168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.383187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.386972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.387173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.387192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.390994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.391159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.391179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.394864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.394979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.394999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.398764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.398863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.398883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.402691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.402865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.402885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.406551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.406657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.406676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.410471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.410560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.410579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.414470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.414626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.414662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.418374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.418671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.418696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.422318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.422553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.422574] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.426392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.426563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.426582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.430283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.430361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.430396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.434333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.434503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.434522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.438195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.438274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.438295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.442056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.442180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.442201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.446124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.446291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.446312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.450034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.450323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.450360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.453877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.454070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.454107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.457907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.458033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.458076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.461845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.461929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.461950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.465823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.465946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.465965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.469756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.469870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.469891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.473700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.473796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.473816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.477632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.477799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 
02:40:21.477818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.481623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.481807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.481826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.862 [2024-11-21 02:40:21.485613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.862 [2024-11-21 02:40:21.485807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.862 [2024-11-21 02:40:21.485827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:40.863 [2024-11-21 02:40:21.489522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.863 [2024-11-21 02:40:21.489624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.863 [2024-11-21 02:40:21.489643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:40.863 [2024-11-21 02:40:21.493523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.863 [2024-11-21 02:40:21.493605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.863 [2024-11-21 02:40:21.493624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:40.863 [2024-11-21 02:40:21.497463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.863 [2024-11-21 02:40:21.497588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.863 [2024-11-21 02:40:21.497607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:40.863 [2024-11-21 02:40:21.501551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:40.863 [2024-11-21 02:40:21.501665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.863 [2024-11-21 02:40:21.501700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.123 [2024-11-21 02:40:21.505618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.123 [2024-11-21 02:40:21.505741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:41.124 [2024-11-21 02:40:21.505761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.509633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.509804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.509837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.513617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.513756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.513776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.517554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.517640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.517659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.521615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.521758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.521776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.525560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.525649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.525668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.529569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.529710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.529731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.533485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.533590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.533609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.537408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.537512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.537532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.541360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.541507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.541526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.545273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.545523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.545544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.549182] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.549300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.549321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.553206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.553316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.553336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.557183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.557273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.557294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.561210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.561358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.561378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.565176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.565276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.565296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.569076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.569172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.569192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.573147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.573294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.573315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.577190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.577461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.577497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.581154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.581231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.581250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.585113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.585264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.585283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.589010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.589103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.589124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.592991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.593121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.593140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.596911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.596992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.597012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.600831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.600915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.600935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.604803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.604959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.604980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.608807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.124 [2024-11-21 02:40:21.609016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.124 [2024-11-21 02:40:21.609035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.124 [2024-11-21 02:40:21.612726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.612921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.612940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.616693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.616807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.616827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.620589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.620665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.620684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.624539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.624669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.624689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.628434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.628555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.628575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.632306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.632386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.632406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.636372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.636518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.636538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.640312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.640535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.640554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.644170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.644260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.644279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.648248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.648368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.648389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.652151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.652236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.652255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.656167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.656291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.656310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.660136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.660221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.660241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.664054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.664142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.664161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.668028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.668176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.668196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.671970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 
02:40:21.672176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.672194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.675921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.676102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.676123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.679844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.680019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.680040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.683793] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.683884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.683903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.687851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.687975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.687994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.691723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.691825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.691844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.695623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.695730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.695761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.699551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 
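The long run of data_crc32_calc_done errors above is the digest test working as intended: each WRITE whose data PDU fails the CRC32C check is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the host-side counter for that status code is incremented. The harness reads that counter back over the bdevperf RPC socket a few lines further on; a minimal stand-alone version of the same query is sketched below (the rpc.py path, socket and bdev name are the ones used in this run, not universal defaults):

#!/usr/bin/env bash
# Sketch: read the transient-transport-error counter that the digest errors above increment.
# Assumes a bdevperf instance listening on /var/tmp/bperf.sock, as in this run.
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1

# Same JSON path the harness filters with jq in the trace below.
count=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" |
  jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

echo "transient transport errors on $BDEV: $count"
(( count > 0 ))   # the digest_error test only passes if this is non-zero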
00:23:41.125 [2024-11-21 02:40:21.699698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.699718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.703484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.703698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.703718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.707444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.707624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.707644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.711379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.711500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.711521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:41.125 [2024-11-21 02:40:21.715291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb76a90) with pdu=0x2000190fef90 00:23:41.125 [2024-11-21 02:40:21.715370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.125 [2024-11-21 02:40:21.715390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:41.125 00:23:41.125 Latency(us) 00:23:41.125 [2024-11-21T02:40:21.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.126 [2024-11-21T02:40:21.773Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:41.126 nvme0n1 : 2.00 7727.40 965.93 0.00 0.00 2066.05 1586.27 8340.95 00:23:41.126 [2024-11-21T02:40:21.773Z] =================================================================================================================== 00:23:41.126 [2024-11-21T02:40:21.773Z] Total : 7727.40 965.93 0.00 0.00 2066.05 1586.27 8340.95 00:23:41.126 0 00:23:41.126 02:40:21 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:41.126 02:40:21 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:41.126 02:40:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:41.126 02:40:21 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:41.126 | .driver_specific 00:23:41.126 | .nvme_error 00:23:41.126 | .status_code 00:23:41.126 | .command_transient_transport_error' 00:23:41.385 02:40:21 -- 
host/digest.sh@71 -- # (( 498 > 0 )) 00:23:41.385 02:40:21 -- host/digest.sh@73 -- # killprocess 87454 00:23:41.385 02:40:21 -- common/autotest_common.sh@936 -- # '[' -z 87454 ']' 00:23:41.385 02:40:21 -- common/autotest_common.sh@940 -- # kill -0 87454 00:23:41.385 02:40:21 -- common/autotest_common.sh@941 -- # uname 00:23:41.385 02:40:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.385 02:40:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87454 00:23:41.385 02:40:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:41.385 killing process with pid 87454 00:23:41.385 02:40:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:41.385 02:40:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87454' 00:23:41.385 02:40:22 -- common/autotest_common.sh@955 -- # kill 87454 00:23:41.385 Received shutdown signal, test time was about 2.000000 seconds 00:23:41.385 00:23:41.385 Latency(us) 00:23:41.385 [2024-11-21T02:40:22.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.385 [2024-11-21T02:40:22.032Z] =================================================================================================================== 00:23:41.385 [2024-11-21T02:40:22.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.385 02:40:22 -- common/autotest_common.sh@960 -- # wait 87454 00:23:41.644 02:40:22 -- host/digest.sh@115 -- # killprocess 87150 00:23:41.644 02:40:22 -- common/autotest_common.sh@936 -- # '[' -z 87150 ']' 00:23:41.644 02:40:22 -- common/autotest_common.sh@940 -- # kill -0 87150 00:23:41.644 02:40:22 -- common/autotest_common.sh@941 -- # uname 00:23:41.644 02:40:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.644 02:40:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87150 00:23:41.902 02:40:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:41.902 02:40:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:41.902 killing process with pid 87150 00:23:41.902 02:40:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87150' 00:23:41.902 02:40:22 -- common/autotest_common.sh@955 -- # kill 87150 00:23:41.902 02:40:22 -- common/autotest_common.sh@960 -- # wait 87150 00:23:42.161 00:23:42.161 real 0m18.518s 00:23:42.161 user 0m34.002s 00:23:42.161 sys 0m5.381s 00:23:42.161 02:40:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:42.161 02:40:22 -- common/autotest_common.sh@10 -- # set +x 00:23:42.162 ************************************ 00:23:42.162 END TEST nvmf_digest_error 00:23:42.162 ************************************ 00:23:42.162 02:40:22 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:42.162 02:40:22 -- host/digest.sh@139 -- # nvmftestfini 00:23:42.162 02:40:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:42.162 02:40:22 -- nvmf/common.sh@116 -- # sync 00:23:42.162 02:40:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:42.162 02:40:22 -- nvmf/common.sh@119 -- # set +e 00:23:42.162 02:40:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:42.162 02:40:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:42.162 rmmod nvme_tcp 00:23:42.162 rmmod nvme_fabrics 00:23:42.162 rmmod nvme_keyring 00:23:42.162 02:40:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:42.162 02:40:22 -- nvmf/common.sh@123 -- # set -e 00:23:42.162 02:40:22 -- nvmf/common.sh@124 -- # return 0 00:23:42.162 02:40:22 -- nvmf/common.sh@477 -- # '[' -n 
87150 ']' 00:23:42.162 02:40:22 -- nvmf/common.sh@478 -- # killprocess 87150 00:23:42.162 02:40:22 -- common/autotest_common.sh@936 -- # '[' -z 87150 ']' 00:23:42.162 02:40:22 -- common/autotest_common.sh@940 -- # kill -0 87150 00:23:42.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87150) - No such process 00:23:42.162 Process with pid 87150 is not found 00:23:42.162 02:40:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87150 is not found' 00:23:42.162 02:40:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:42.162 02:40:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:42.162 02:40:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:42.162 02:40:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:42.162 02:40:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:42.162 02:40:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.162 02:40:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.162 02:40:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.421 02:40:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:42.421 ************************************ 00:23:42.421 END TEST nvmf_digest 00:23:42.421 ************************************ 00:23:42.421 00:23:42.421 real 0m38.036s 00:23:42.421 user 1m8.481s 00:23:42.421 sys 0m11.220s 00:23:42.421 02:40:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:42.421 02:40:22 -- common/autotest_common.sh@10 -- # set +x 00:23:42.421 02:40:22 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:42.421 02:40:22 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:42.421 02:40:22 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:42.421 02:40:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:42.421 02:40:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:42.421 02:40:22 -- common/autotest_common.sh@10 -- # set +x 00:23:42.421 ************************************ 00:23:42.421 START TEST nvmf_mdns_discovery 00:23:42.421 ************************************ 00:23:42.421 02:40:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:42.421 * Looking for test storage... 
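For reference, the pass/fail check that closed out the digest error test above comes from reading bdev iostat over bperf's RPC socket and pulling the transient-transport-error counter out with jq (the "(( 498 > 0 ))" test in the trace). A minimal stand-alone sketch of that query, assuming the same repository path, socket path, and bdev name used in this run:

    # Count NVMe transient transport errors observed by the bperf bdev,
    # mirroring get_transient_errcount in host/digest.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock     # bperf's private RPC socket in this run
    bdev=nvme0n1                 # bdev bperf attached over NVMe/TCP

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # The error variant of the test expects this counter to be non-zero.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"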
00:23:42.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:42.421 02:40:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:42.421 02:40:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:42.421 02:40:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:42.421 02:40:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:42.421 02:40:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:42.421 02:40:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:42.421 02:40:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:42.421 02:40:23 -- scripts/common.sh@335 -- # IFS=.-: 00:23:42.421 02:40:23 -- scripts/common.sh@335 -- # read -ra ver1 00:23:42.421 02:40:23 -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.421 02:40:23 -- scripts/common.sh@336 -- # read -ra ver2 00:23:42.421 02:40:23 -- scripts/common.sh@337 -- # local 'op=<' 00:23:42.421 02:40:23 -- scripts/common.sh@339 -- # ver1_l=2 00:23:42.421 02:40:23 -- scripts/common.sh@340 -- # ver2_l=1 00:23:42.421 02:40:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:42.421 02:40:23 -- scripts/common.sh@343 -- # case "$op" in 00:23:42.421 02:40:23 -- scripts/common.sh@344 -- # : 1 00:23:42.421 02:40:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:42.421 02:40:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.421 02:40:23 -- scripts/common.sh@364 -- # decimal 1 00:23:42.421 02:40:23 -- scripts/common.sh@352 -- # local d=1 00:23:42.421 02:40:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.421 02:40:23 -- scripts/common.sh@354 -- # echo 1 00:23:42.421 02:40:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:42.421 02:40:23 -- scripts/common.sh@365 -- # decimal 2 00:23:42.421 02:40:23 -- scripts/common.sh@352 -- # local d=2 00:23:42.421 02:40:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.421 02:40:23 -- scripts/common.sh@354 -- # echo 2 00:23:42.421 02:40:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:42.421 02:40:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:42.421 02:40:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:42.421 02:40:23 -- scripts/common.sh@367 -- # return 0 00:23:42.421 02:40:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.421 02:40:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:42.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.421 --rc genhtml_branch_coverage=1 00:23:42.421 --rc genhtml_function_coverage=1 00:23:42.421 --rc genhtml_legend=1 00:23:42.421 --rc geninfo_all_blocks=1 00:23:42.421 --rc geninfo_unexecuted_blocks=1 00:23:42.421 00:23:42.421 ' 00:23:42.421 02:40:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:42.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.421 --rc genhtml_branch_coverage=1 00:23:42.421 --rc genhtml_function_coverage=1 00:23:42.421 --rc genhtml_legend=1 00:23:42.421 --rc geninfo_all_blocks=1 00:23:42.421 --rc geninfo_unexecuted_blocks=1 00:23:42.421 00:23:42.421 ' 00:23:42.421 02:40:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:42.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.421 --rc genhtml_branch_coverage=1 00:23:42.421 --rc genhtml_function_coverage=1 00:23:42.421 --rc genhtml_legend=1 00:23:42.421 --rc geninfo_all_blocks=1 00:23:42.421 --rc geninfo_unexecuted_blocks=1 00:23:42.421 00:23:42.421 ' 00:23:42.421 
02:40:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:42.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.421 --rc genhtml_branch_coverage=1 00:23:42.421 --rc genhtml_function_coverage=1 00:23:42.421 --rc genhtml_legend=1 00:23:42.421 --rc geninfo_all_blocks=1 00:23:42.421 --rc geninfo_unexecuted_blocks=1 00:23:42.421 00:23:42.421 ' 00:23:42.421 02:40:23 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:42.421 02:40:23 -- nvmf/common.sh@7 -- # uname -s 00:23:42.680 02:40:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.680 02:40:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.680 02:40:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.680 02:40:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.680 02:40:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.680 02:40:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.680 02:40:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.680 02:40:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.680 02:40:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.680 02:40:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.680 02:40:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:23:42.680 02:40:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:23:42.680 02:40:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.680 02:40:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.680 02:40:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:42.680 02:40:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:42.680 02:40:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.680 02:40:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.680 02:40:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.680 02:40:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.680 02:40:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.680 02:40:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.680 02:40:23 -- paths/export.sh@5 -- # export PATH 00:23:42.680 02:40:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.680 02:40:23 -- nvmf/common.sh@46 -- # : 0 00:23:42.680 02:40:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:42.680 02:40:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:42.680 02:40:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:42.680 02:40:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.680 02:40:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.680 02:40:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:42.680 02:40:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:42.680 02:40:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:42.680 02:40:23 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:42.680 02:40:23 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:42.680 02:40:23 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:42.680 02:40:23 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:42.680 02:40:23 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:42.680 02:40:23 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:42.680 02:40:23 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:42.680 02:40:23 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:42.680 02:40:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:42.680 02:40:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.680 02:40:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:42.680 02:40:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:42.680 02:40:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:42.680 02:40:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.680 02:40:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.680 02:40:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.680 02:40:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:42.680 02:40:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:42.680 02:40:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:42.680 02:40:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:42.680 02:40:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:42.680 02:40:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:42.680 02:40:23 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:42.680 02:40:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.680 02:40:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:42.680 02:40:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:42.680 02:40:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:42.680 02:40:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:42.680 02:40:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:42.680 02:40:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.680 02:40:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:42.680 02:40:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:42.680 02:40:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:42.681 02:40:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:42.681 02:40:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:42.681 02:40:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:42.681 Cannot find device "nvmf_tgt_br" 00:23:42.681 02:40:23 -- nvmf/common.sh@154 -- # true 00:23:42.681 02:40:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:42.681 Cannot find device "nvmf_tgt_br2" 00:23:42.681 02:40:23 -- nvmf/common.sh@155 -- # true 00:23:42.681 02:40:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:42.681 02:40:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:42.681 Cannot find device "nvmf_tgt_br" 00:23:42.681 02:40:23 -- nvmf/common.sh@157 -- # true 00:23:42.681 02:40:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:42.681 Cannot find device "nvmf_tgt_br2" 00:23:42.681 02:40:23 -- nvmf/common.sh@158 -- # true 00:23:42.681 02:40:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:42.681 02:40:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:42.681 02:40:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:42.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:42.681 02:40:23 -- nvmf/common.sh@161 -- # true 00:23:42.681 02:40:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:42.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:42.681 02:40:23 -- nvmf/common.sh@162 -- # true 00:23:42.681 02:40:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:42.681 02:40:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:42.681 02:40:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:42.681 02:40:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:42.681 02:40:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:42.681 02:40:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:42.681 02:40:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:42.681 02:40:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:42.681 02:40:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:42.940 02:40:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:42.940 02:40:23 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:23:42.940 02:40:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:42.940 02:40:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:42.940 02:40:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:42.940 02:40:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:42.940 02:40:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:42.940 02:40:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:42.940 02:40:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:42.940 02:40:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:42.940 02:40:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:42.940 02:40:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:42.940 02:40:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:42.940 02:40:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:42.940 02:40:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:42.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:23:42.940 00:23:42.940 --- 10.0.0.2 ping statistics --- 00:23:42.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.940 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:23:42.940 02:40:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:42.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:42.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:23:42.940 00:23:42.940 --- 10.0.0.3 ping statistics --- 00:23:42.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.940 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:42.940 02:40:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:42.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:42.940 00:23:42.940 --- 10.0.0.1 ping statistics --- 00:23:42.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.940 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:42.940 02:40:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.940 02:40:23 -- nvmf/common.sh@421 -- # return 0 00:23:42.940 02:40:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:42.940 02:40:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.940 02:40:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:42.940 02:40:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:42.940 02:40:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.940 02:40:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:42.940 02:40:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:42.940 02:40:23 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:42.940 02:40:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:42.940 02:40:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.940 02:40:23 -- common/autotest_common.sh@10 -- # set +x 00:23:42.940 02:40:23 -- nvmf/common.sh@469 -- # nvmfpid=87760 00:23:42.940 02:40:23 -- nvmf/common.sh@470 -- # waitforlisten 87760 00:23:42.941 02:40:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:42.941 02:40:23 -- common/autotest_common.sh@829 -- # '[' -z 87760 ']' 00:23:42.941 02:40:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.941 02:40:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.941 02:40:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.941 02:40:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.941 02:40:23 -- common/autotest_common.sh@10 -- # set +x 00:23:42.941 [2024-11-21 02:40:23.519856] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:42.941 [2024-11-21 02:40:23.519931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.200 [2024-11-21 02:40:23.658371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.200 [2024-11-21 02:40:23.751088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:43.200 [2024-11-21 02:40:23.751270] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.200 [2024-11-21 02:40:23.751288] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.200 [2024-11-21 02:40:23.751300] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
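The run of ip and iptables commands above is nvmf_veth_init laying out the virtual test network: an initiator-side veth at 10.0.0.1, a target network namespace holding two veth links at 10.0.0.2 and 10.0.0.3, and a bridge tying the host-side peers together, verified with three pings. A condensed sketch of the same topology using the interface names and addresses from this log (run as root; the pre-cleanup of stale devices is omitted):

    # Target side lives in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one initiator link, two target links.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the links up on both sides of the namespace boundary.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so initiator and target share one L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic in, allow bridge forwarding, then sanity-check.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1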
00:23:43.200 [2024-11-21 02:40:23.751334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.135 02:40:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.135 02:40:24 -- common/autotest_common.sh@862 -- # return 0 00:23:44.135 02:40:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:44.135 02:40:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 02:40:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 [2024-11-21 02:40:24.579848] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 [2024-11-21 02:40:24.587974] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 null0 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 null1 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 null2 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 null3 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
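rpc_cmd in the trace above forwards each call to the target's JSON-RPC server (the target announced /var/tmp/spdk.sock when it started); the sequence configures everything the mDNS test will later advertise. Roughly the same bring-up expressed as direct scripts/rpc.py calls, assuming the default socket and the paths from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target listens on /var/tmp/spdk.sock

    # Restrict discovery responses by address (the test's DISCOVERY_FILTER=address).
    $rpc nvmf_set_config --discovery-filter=address
    # The target was launched with --wait-for-rpc, so finish initialization explicitly.
    $rpc framework_start_init

    # TCP transport plus a discovery listener on the first target address.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

    # Four null bdevs (1000 MB, 512-byte blocks) to back the test namespaces.
    for b in null0 null1 null2 null3; do
        $rpc bdev_null_create "$b" 1000 512
    done
    $rpc bdev_wait_for_examine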
00:23:44.135 02:40:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 02:40:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@47 -- # hostpid=87809 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:44.135 02:40:24 -- host/mdns_discovery.sh@48 -- # waitforlisten 87809 /tmp/host.sock 00:23:44.135 02:40:24 -- common/autotest_common.sh@829 -- # '[' -z 87809 ']' 00:23:44.135 02:40:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:44.135 02:40:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.135 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:44.135 02:40:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:44.135 02:40:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.135 02:40:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 [2024-11-21 02:40:24.694654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:44.135 [2024-11-21 02:40:24.694769] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87809 ] 00:23:44.394 [2024-11-21 02:40:24.828154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.394 [2024-11-21 02:40:24.917325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:44.394 [2024-11-21 02:40:24.917704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.330 02:40:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.330 02:40:25 -- common/autotest_common.sh@862 -- # return 0 00:23:45.330 02:40:25 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:45.330 02:40:25 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:45.330 02:40:25 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:45.330 02:40:25 -- host/mdns_discovery.sh@57 -- # avahipid=87839 00:23:45.330 02:40:25 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:45.330 02:40:25 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:45.330 02:40:25 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:45.330 Process 1060 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:45.330 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:45.330 Successfully dropped root privileges. 00:23:45.330 avahi-daemon 0.8 starting up. 00:23:45.330 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:45.330 Successfully called chroot(). 00:23:45.330 Successfully dropped remaining capabilities. 00:23:45.330 No service file found in /etc/avahi/services. 00:23:46.266 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:46.266 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 
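The mDNS side runs with tight scoping: a second SPDK app is started as the NVMe-oF host on its own RPC socket, and avahi-daemon is launched inside the target namespace with an inline config that limits it to the two target interfaces and IPv4, so the test's multicast DNS stays on the veth topology. A sketch of those two launches, assuming the binary path, socket, and config shown in the trace (the config is handed over via process substitution, which is why /dev/fd/63 appears above):

    # Stop any system-wide avahi instance so it cannot answer on the test's behalf.
    avahi-daemon --kill || true

    # SPDK app playing the host role, on a private RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!

    # avahi-daemon confined to the target namespace, the two target interfaces, IPv4 only.
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
        '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
    avahipid=$!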
00:23:46.266 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:46.266 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:46.266 Network interface enumeration completed. 00:23:46.266 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:46.266 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:46.266 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:46.266 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:46.266 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 3247138010. 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:46.266 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.266 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.266 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:46.266 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.266 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.266 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:46.266 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.266 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@68 -- # sort 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@68 -- # xargs 00:23:46.266 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.266 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.266 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@64 -- # sort 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@64 -- # xargs 00:23:46.266 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:46.266 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.266 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.266 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:46.266 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.266 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.266 02:40:26 -- host/mdns_discovery.sh@68 -- # sort 
00:23:46.266 02:40:26 -- host/mdns_discovery.sh@68 -- # xargs 00:23:46.266 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.528 02:40:26 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:46.528 02:40:26 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.529 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.529 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@64 -- # xargs 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@64 -- # sort 00:23:46.529 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:46.529 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.529 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.529 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@68 -- # sort 00:23:46.529 02:40:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.529 02:40:26 -- host/mdns_discovery.sh@68 -- # xargs 00:23:46.529 02:40:26 -- common/autotest_common.sh@10 -- # set +x 00:23:46.529 02:40:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:46.529 [2024-11-21 02:40:27.033473] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@64 -- # sort 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.529 02:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.529 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@64 -- # xargs 00:23:46.529 02:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:46.529 02:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.529 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:23:46.529 [2024-11-21 02:40:27.088622] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.529 02:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.529 02:40:27 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:46.529 02:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.530 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:23:46.530 02:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.530 02:40:27 -- host/mdns_discovery.sh@111 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:46.530 02:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.530 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:23:46.530 02:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.530 02:40:27 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:46.530 02:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.530 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:23:46.530 02:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.530 02:40:27 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:46.530 02:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.530 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:23:46.530 02:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.530 02:40:27 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:46.530 02:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.530 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:23:46.530 [2024-11-21 02:40:27.128584] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:46.530 02:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.530 02:40:27 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:46.530 02:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.530 02:40:27 -- common/autotest_common.sh@10 -- # set +x 00:23:46.530 [2024-11-21 02:40:27.136580] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:46.530 02:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.530 02:40:27 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=87890 00:23:46.531 02:40:27 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:46.531 02:40:27 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:47.470 [2024-11-21 02:40:27.933469] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:47.470 Established under name 'CDC' 00:23:47.730 [2024-11-21 02:40:28.333478] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:47.730 [2024-11-21 02:40:28.333504] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:47.730 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:47.730 cookie is 0 00:23:47.730 is_local: 1 00:23:47.730 our_own: 0 00:23:47.730 wide_area: 0 00:23:47.730 multicast: 1 00:23:47.730 cached: 1 00:23:47.989 [2024-11-21 02:40:28.433474] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:47.989 [2024-11-21 02:40:28.433497] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:47.989 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:47.989 cookie is 0 00:23:47.989 is_local: 1 00:23:47.989 our_own: 0 00:23:47.989 wide_area: 0 00:23:47.989 multicast: 1 00:23:47.989 cached: 1 00:23:48.926 [2024-11-21 02:40:29.343995] 
bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:48.926 [2024-11-21 02:40:29.344022] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:48.926 [2024-11-21 02:40:29.344039] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:48.926 [2024-11-21 02:40:29.430103] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:48.926 [2024-11-21 02:40:29.443668] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:48.926 [2024-11-21 02:40:29.443688] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:48.926 [2024-11-21 02:40:29.443710] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:48.926 [2024-11-21 02:40:29.494373] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:48.926 [2024-11-21 02:40:29.494399] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:48.926 [2024-11-21 02:40:29.531534] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:49.184 [2024-11-21 02:40:29.593101] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:49.184 [2024-11-21 02:40:29.593127] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:51.718 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.718 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@80 -- # sort 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@80 -- # xargs 00:23:51.718 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@76 -- # sort 00:23:51.718 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.718 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@76 -- # xargs 00:23:51.718 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.718 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.718 02:40:32 -- 
common/autotest_common.sh@10 -- # set +x 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@68 -- # sort 00:23:51.718 02:40:32 -- host/mdns_discovery.sh@68 -- # xargs 00:23:51.719 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.719 02:40:32 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:51.719 02:40:32 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:51.719 02:40:32 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:51.719 02:40:32 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.719 02:40:32 -- host/mdns_discovery.sh@64 -- # sort 00:23:51.719 02:40:32 -- host/mdns_discovery.sh@64 -- # xargs 00:23:51.719 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.719 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:51.719 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:51.978 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:51.978 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@72 -- # xargs 00:23:51.978 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:51.978 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:51.978 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@72 -- # xargs 00:23:51.978 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:51.978 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.978 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:51.978 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:51.978 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.978 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:51.978 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:51.978 02:40:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.978 02:40:32 -- common/autotest_common.sh@10 -- # set +x 00:23:51.978 02:40:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.978 02:40:32 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:52.914 02:40:33 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:52.914 02:40:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.914 02:40:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.914 02:40:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:52.914 02:40:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.914 02:40:33 -- host/mdns_discovery.sh@64 -- # sort 00:23:52.914 02:40:33 -- host/mdns_discovery.sh@64 -- # xargs 00:23:53.173 02:40:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:53.173 02:40:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.173 02:40:33 -- common/autotest_common.sh@10 -- # set +x 00:23:53.173 02:40:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:53.173 02:40:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.173 02:40:33 -- common/autotest_common.sh@10 -- # set +x 00:23:53.173 [2024-11-21 02:40:33.667295] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:53.173 [2024-11-21 02:40:33.667766] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:53.173 [2024-11-21 02:40:33.667808] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:53.173 [2024-11-21 02:40:33.667838] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:53.173 [2024-11-21 02:40:33.667850] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:53.173 02:40:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:53.173 02:40:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.173 02:40:33 -- common/autotest_common.sh@10 -- # set +x 00:23:53.173 [2024-11-21 02:40:33.675208] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:53.173 [2024-11-21 02:40:33.675818] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:53.173 [2024-11-21 02:40:33.675868] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:53.173 02:40:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.173 02:40:33 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:53.173 [2024-11-21 02:40:33.805892] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:53.173 [2024-11-21 02:40:33.806902] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:53.432 [2024-11-21 02:40:33.867068] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:53.432 [2024-11-21 02:40:33.867088] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:53.432 [2024-11-21 02:40:33.867094] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:53.432 [2024-11-21 02:40:33.867109] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:53.432 [2024-11-21 02:40:33.867203] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:53.432 [2024-11-21 02:40:33.867211] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:53.432 [2024-11-21 02:40:33.867216] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:53.432 [2024-11-21 02:40:33.867227] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:53.432 [2024-11-21 02:40:33.912975] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:53.432 [2024-11-21 02:40:33.913097] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:53.432 [2024-11-21 02:40:33.913152] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:53.432 [2024-11-21 02:40:33.913161] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.368 02:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@68 -- # sort 00:23:54.368 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@68 -- # xargs 00:23:54.368 02:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:54.368 02:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.368 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@64 -- # sort 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@64 -- # xargs 00:23:54.368 02:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:54.368 02:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.368 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@72 -- # xargs 00:23:54.368 02:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:54.368 02:40:34 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@72 -- # jq 
-r '.[].ctrlrs[].trid.trsvcid' 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:54.369 02:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@72 -- # xargs 00:23:54.369 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:23:54.369 02:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:54.369 02:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.369 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:54.369 02:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:54.369 02:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.369 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:23:54.369 [2024-11-21 02:40:34.979958] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.369 [2024-11-21 02:40:34.980006] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.369 [2024-11-21 02:40:34.980046] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:54.369 [2024-11-21 02:40:34.980059] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:54.369 [2024-11-21 02:40:34.982644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.369 [2024-11-21 02:40:34.982843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.369 [2024-11-21 02:40:34.983129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.369 [2024-11-21 02:40:34.983248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.369 [2024-11-21 02:40:34.983436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.369 [2024-11-21 02:40:34.983485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.369 [2024-11-21 02:40:34.983591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.369 [2024-11-21 02:40:34.983724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.369 [2024-11-21 02:40:34.983942] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.369 02:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:54.369 02:40:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.369 02:40:34 -- common/autotest_common.sh@10 -- # set +x 00:23:54.369 [2024-11-21 02:40:34.992070] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.369 [2024-11-21 02:40:34.992124] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:54.369 [2024-11-21 02:40:34.992602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.369 [2024-11-21 02:40:34.994847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.369 [2024-11-21 02:40:34.994876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.369 [2024-11-21 02:40:34.994888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.369 [2024-11-21 02:40:34.994896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.369 [2024-11-21 02:40:34.994904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.369 [2024-11-21 02:40:34.994912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.369 [2024-11-21 02:40:34.994921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.369 [2024-11-21 02:40:34.994929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.369 [2024-11-21 02:40:34.994937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.369 02:40:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.369 02:40:34 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:54.369 [2024-11-21 02:40:35.002621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.369 [2024-11-21 02:40:35.002867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.369 [2024-11-21 02:40:35.002913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.369 [2024-11-21 02:40:35.002928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.369 [2024-11-21 02:40:35.002938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.369 [2024-11-21 02:40:35.002955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.369 [2024-11-21 02:40:35.002968] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.369 [2024-11-21 02:40:35.002978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.369 [2024-11-21 02:40:35.002987] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.369 [2024-11-21 02:40:35.003003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.369 [2024-11-21 02:40:35.004816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.630 [2024-11-21 02:40:35.012823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.630 [2024-11-21 02:40:35.012898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.630 [2024-11-21 02:40:35.012938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.630 [2024-11-21 02:40:35.012953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.630 [2024-11-21 02:40:35.012962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.630 [2024-11-21 02:40:35.012976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.630 [2024-11-21 02:40:35.012988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.630 [2024-11-21 02:40:35.012997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.630 [2024-11-21 02:40:35.013005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.630 [2024-11-21 02:40:35.013018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.630 [2024-11-21 02:40:35.014824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.630 [2024-11-21 02:40:35.014896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.630 [2024-11-21 02:40:35.014935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.630 [2024-11-21 02:40:35.014949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.630 [2024-11-21 02:40:35.014958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.630 [2024-11-21 02:40:35.014971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.630 [2024-11-21 02:40:35.014983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.630 [2024-11-21 02:40:35.014991] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.630 [2024-11-21 02:40:35.014999] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.630 [2024-11-21 02:40:35.015011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
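The entries above capture the failover trigger for this part of the test: mdns_discovery.sh removes the 4420 listener from each subsystem, the host's admin queues on port 4420 are aborted (ABORTED - SQ DELETION), and every reconnect to 10.0.0.2:4420 / 10.0.0.3:4420 then fails with errno 111 (ECONNREFUSED) because nothing is listening there any more. Written out as a standalone sketch of the target-side step (rpc_cmd here is the autotest helper that forwards to SPDK's scripts/rpc.py on the target's default RPC socket):

  # Drop the 4420 listener from each subsystem; hosts attached via 4420 lose that path
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420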
00:23:54.630 [2024-11-21 02:40:35.022868] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.631 [2024-11-21 02:40:35.022937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.022975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.022989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.631 [2024-11-21 02:40:35.022998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.023012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.023024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.023031] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.023039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.631 [2024-11-21 02:40:35.023051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.631 [2024-11-21 02:40:35.024870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.631 [2024-11-21 02:40:35.024937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.024975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.024989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.631 [2024-11-21 02:40:35.024999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.025012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.025024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.025032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.025040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.631 [2024-11-21 02:40:35.025053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.631 [2024-11-21 02:40:35.032913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.631 [2024-11-21 02:40:35.032984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.033023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.033037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.631 [2024-11-21 02:40:35.033047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.033061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.033073] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.033080] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.033088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.631 [2024-11-21 02:40:35.033101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.631 [2024-11-21 02:40:35.034914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.631 [2024-11-21 02:40:35.034984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.035025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.035039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.631 [2024-11-21 02:40:35.035049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.035063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.035076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.035084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.035098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.631 [2024-11-21 02:40:35.035126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.631 [2024-11-21 02:40:35.042959] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.631 [2024-11-21 02:40:35.043036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.043075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.043090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.631 [2024-11-21 02:40:35.043099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.043113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.043125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.043133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.043141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.631 [2024-11-21 02:40:35.043154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.631 [2024-11-21 02:40:35.044959] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.631 [2024-11-21 02:40:35.045027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.045066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.045080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.631 [2024-11-21 02:40:35.045089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.045102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.045114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.045122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.045130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.631 [2024-11-21 02:40:35.045142] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.631 [2024-11-21 02:40:35.053006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.631 [2024-11-21 02:40:35.053198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.053240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.053254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.631 [2024-11-21 02:40:35.053264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.053278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.053307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.053317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.053325] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.631 [2024-11-21 02:40:35.053338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.631 [2024-11-21 02:40:35.055004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.631 [2024-11-21 02:40:35.055077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.055132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.055146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.631 [2024-11-21 02:40:35.055156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.055169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.055181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.055189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.055196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.631 [2024-11-21 02:40:35.055210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.631 [2024-11-21 02:40:35.063162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.631 [2024-11-21 02:40:35.063232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.063270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.063284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.631 [2024-11-21 02:40:35.063293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.631 [2024-11-21 02:40:35.063307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.631 [2024-11-21 02:40:35.063332] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.631 [2024-11-21 02:40:35.063341] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.631 [2024-11-21 02:40:35.063349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.631 [2024-11-21 02:40:35.063362] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.631 [2024-11-21 02:40:35.065051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.631 [2024-11-21 02:40:35.065120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.065173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.631 [2024-11-21 02:40:35.065187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.632 [2024-11-21 02:40:35.065196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.065210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.065222] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.065229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.065237] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.632 [2024-11-21 02:40:35.065250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.632 [2024-11-21 02:40:35.073207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.632 [2024-11-21 02:40:35.073277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.073315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.073328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.632 [2024-11-21 02:40:35.073337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.073351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.073377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.073386] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.073395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.632 [2024-11-21 02:40:35.073407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.632 [2024-11-21 02:40:35.075095] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.632 [2024-11-21 02:40:35.075196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.075235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.075249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.632 [2024-11-21 02:40:35.075258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.075271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.075283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.075291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.075298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.632 [2024-11-21 02:40:35.075311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.632 [2024-11-21 02:40:35.083254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.632 [2024-11-21 02:40:35.083330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.083369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.083383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.632 [2024-11-21 02:40:35.083392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.083406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.083452] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.083463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.083471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.632 [2024-11-21 02:40:35.083484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.632 [2024-11-21 02:40:35.085140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.632 [2024-11-21 02:40:35.085208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.085247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.085261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.632 [2024-11-21 02:40:35.085270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.085284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.085296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.085304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.085312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.632 [2024-11-21 02:40:35.085324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.632 [2024-11-21 02:40:35.093301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.632 [2024-11-21 02:40:35.093371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.093410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.093424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.632 [2024-11-21 02:40:35.093433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.093447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.093473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.093482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.093490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.632 [2024-11-21 02:40:35.093502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.632 [2024-11-21 02:40:35.095182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.632 [2024-11-21 02:40:35.095250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.095289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.095303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.632 [2024-11-21 02:40:35.095311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.095325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.095337] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.095345] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.095352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.632 [2024-11-21 02:40:35.095365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.632 [2024-11-21 02:40:35.103346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.632 [2024-11-21 02:40:35.103416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.103454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.103468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.632 [2024-11-21 02:40:35.103477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.103491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.103516] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.103525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.103533] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.632 [2024-11-21 02:40:35.103546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.632 [2024-11-21 02:40:35.105227] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.632 [2024-11-21 02:40:35.105415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.105456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.105471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.632 [2024-11-21 02:40:35.105481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.105495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.632 [2024-11-21 02:40:35.105524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.632 [2024-11-21 02:40:35.105533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.632 [2024-11-21 02:40:35.105541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.632 [2024-11-21 02:40:35.105555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.632 [2024-11-21 02:40:35.113392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.632 [2024-11-21 02:40:35.113463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.113501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.632 [2024-11-21 02:40:35.113515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.632 [2024-11-21 02:40:35.113524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.632 [2024-11-21 02:40:35.113538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.633 [2024-11-21 02:40:35.113564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.633 [2024-11-21 02:40:35.113573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.633 [2024-11-21 02:40:35.113581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.633 [2024-11-21 02:40:35.113595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.633 [2024-11-21 02:40:35.115380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:54.633 [2024-11-21 02:40:35.115449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.633 [2024-11-21 02:40:35.115488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.633 [2024-11-21 02:40:35.115502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228a410 with addr=10.0.0.3, port=4420 00:23:54.633 [2024-11-21 02:40:35.115511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228a410 is same with the state(5) to be set 00:23:54.633 [2024-11-21 02:40:35.115525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228a410 (9): Bad file descriptor 00:23:54.633 [2024-11-21 02:40:35.115538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:54.633 [2024-11-21 02:40:35.115545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:54.633 [2024-11-21 02:40:35.115552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:54.633 [2024-11-21 02:40:35.115578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:54.633 [2024-11-21 02:40:35.123436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.633 [2024-11-21 02:40:35.123620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.633 [2024-11-21 02:40:35.123662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.633 [2024-11-21 02:40:35.123676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eeb70 with addr=10.0.0.2, port=4420 00:23:54.633 [2024-11-21 02:40:35.123686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eeb70 is same with the state(5) to be set 00:23:54.633 [2024-11-21 02:40:35.123730] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:54.633 [2024-11-21 02:40:35.123783] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.633 [2024-11-21 02:40:35.123804] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.633 [2024-11-21 02:40:35.123836] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:54.633 [2024-11-21 02:40:35.123850] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:54.633 [2024-11-21 02:40:35.123862] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:54.633 [2024-11-21 02:40:35.123891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eeb70 (9): Bad file descriptor 00:23:54.633 [2024-11-21 02:40:35.123932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.633 [2024-11-21 02:40:35.123942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.633 [2024-11-21 02:40:35.123951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.633 [2024-11-21 02:40:35.123975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
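The repeated "resetting controller" / "connect() failed, errno = 111" / "Resetting controller failed." cycles above are bdev_nvme retrying the dead 4420 path; they end once the next discovery log page reports each subsystem only on 4421, at which point the stale 4420 path is dropped ("not found") and the 4421 path is kept ("found again"). The test's get_subsystem_paths check, which confirms the surviving port from the host-side RPC socket, boils down to roughly:

  # List the ports still attached to a discovered controller (expected here: 4421)
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs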
00:23:54.633 [2024-11-21 02:40:35.209663] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.633 [2024-11-21 02:40:35.210668] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:55.571 02:40:35 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:55.571 02:40:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.571 02:40:36 -- common/autotest_common.sh@10 -- # set +x 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@68 -- # sort 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@68 -- # xargs 00:23:55.571 02:40:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.571 02:40:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.571 02:40:36 -- common/autotest_common.sh@10 -- # set +x 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@64 -- # sort 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@64 -- # xargs 00:23:55.571 02:40:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.571 02:40:36 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:55.572 02:40:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.572 02:40:36 -- common/autotest_common.sh@10 -- # set +x 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@72 -- # xargs 00:23:55.572 02:40:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:55.572 02:40:36 -- host/mdns_discovery.sh@72 -- # xargs 00:23:55.572 02:40:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.572 02:40:36 -- common/autotest_common.sh@10 -- # set +x 00:23:55.572 02:40:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:55.830 
02:40:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.830 02:40:36 -- common/autotest_common.sh@10 -- # set +x 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:55.830 02:40:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:55.830 02:40:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.830 02:40:36 -- common/autotest_common.sh@10 -- # set +x 00:23:55.830 02:40:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.830 02:40:36 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:55.830 [2024-11-21 02:40:36.333487] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:56.766 02:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.766 02:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@80 -- # sort 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@80 -- # xargs 00:23:56.766 02:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:56.766 02:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.766 02:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@68 -- # sort 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@68 -- # xargs 00:23:56.766 02:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@64 -- # sort 00:23:56.766 02:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.766 02:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:56.766 02:40:37 -- host/mdns_discovery.sh@64 -- # xargs 00:23:57.025 02:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:57.025 02:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:57.025 02:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:57.025 02:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:57.025 02:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.025 02:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:57.025 02:40:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.025 02:40:37 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:57.025 02:40:37 -- common/autotest_common.sh@650 -- # local es=0 00:23:57.025 02:40:37 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:57.025 02:40:37 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:57.025 02:40:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.025 02:40:37 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:57.025 02:40:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.025 02:40:37 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:57.025 02:40:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.025 02:40:37 -- common/autotest_common.sh@10 -- # set +x 00:23:57.025 [2024-11-21 02:40:37.514467] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:57.025 2024/11/21 02:40:37 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:57.025 request: 00:23:57.025 { 00:23:57.025 "method": "bdev_nvme_start_mdns_discovery", 00:23:57.025 "params": { 00:23:57.025 "name": "mdns", 00:23:57.025 "svcname": "_nvme-disc._http", 00:23:57.025 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:57.025 } 00:23:57.025 } 00:23:57.025 Got JSON-RPC error response 00:23:57.025 GoRPCClient: error on JSON-RPC call 00:23:57.026 02:40:37 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:57.026 02:40:37 -- common/autotest_common.sh@653 -- # es=1 00:23:57.026 02:40:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:57.026 02:40:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:57.026 02:40:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:57.026 02:40:37 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:57.284 [2024-11-21 02:40:37.903008] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:57.543 [2024-11-21 02:40:38.003007] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:57.543 [2024-11-21 02:40:38.103012] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:57.543 [2024-11-21 02:40:38.103161] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 
fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:57.543 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:57.543 cookie is 0 00:23:57.543 is_local: 1 00:23:57.543 our_own: 0 00:23:57.543 wide_area: 0 00:23:57.543 multicast: 1 00:23:57.543 cached: 1 00:23:57.801 [2024-11-21 02:40:38.203011] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:57.801 [2024-11-21 02:40:38.203171] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:57.801 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:57.801 cookie is 0 00:23:57.801 is_local: 1 00:23:57.801 our_own: 0 00:23:57.801 wide_area: 0 00:23:57.801 multicast: 1 00:23:57.801 cached: 1 00:23:58.766 [2024-11-21 02:40:39.115431] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:58.766 [2024-11-21 02:40:39.115574] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:58.766 [2024-11-21 02:40:39.115626] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:58.766 [2024-11-21 02:40:39.201521] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:58.766 [2024-11-21 02:40:39.215287] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:58.766 [2024-11-21 02:40:39.215420] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:58.766 [2024-11-21 02:40:39.215471] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:58.766 [2024-11-21 02:40:39.270184] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:58.766 [2024-11-21 02:40:39.270348] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:58.766 [2024-11-21 02:40:39.301527] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:58.766 [2024-11-21 02:40:39.360301] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:58.766 [2024-11-21 02:40:39.360466] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@80 -- # sort 00:24:02.075 02:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@80 -- # xargs 00:24:02.075 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.075 02:40:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:02.075 
02:40:42 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:24:02.075 02:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.075 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@76 -- # sort 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@76 -- # xargs 00:24:02.075 02:40:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@64 -- # xargs 00:24:02.075 02:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@64 -- # sort 00:24:02.075 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.075 02:40:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:02.075 02:40:42 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:02.075 02:40:42 -- common/autotest_common.sh@650 -- # local es=0 00:24:02.075 02:40:42 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:02.075 02:40:42 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:02.075 02:40:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.075 02:40:42 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:02.075 02:40:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:02.076 02:40:42 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:02.076 02:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.076 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.076 [2024-11-21 02:40:42.692034] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:24:02.076 2024/11/21 02:40:42 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:24:02.076 request: 00:24:02.076 { 00:24:02.076 "method": "bdev_nvme_start_mdns_discovery", 00:24:02.076 "params": { 00:24:02.076 "name": "cdc", 00:24:02.076 "svcname": "_nvme-disc._tcp", 00:24:02.076 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:02.076 } 00:24:02.076 } 00:24:02.076 Got JSON-RPC error response 00:24:02.076 GoRPCClient: error on JSON-RPC call 00:24:02.076 02:40:42 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:02.076 02:40:42 -- common/autotest_common.sh@653 -- # es=1 00:24:02.076 02:40:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:02.076 02:40:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:02.076 02:40:42 -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:24:02.076 02:40:42 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:24:02.076 02:40:42 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:02.076 02:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.076 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.076 02:40:42 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:24:02.076 02:40:42 -- host/mdns_discovery.sh@76 -- # sort 00:24:02.076 02:40:42 -- host/mdns_discovery.sh@76 -- # xargs 00:24:02.334 02:40:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@64 -- # sort 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:02.334 02:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@64 -- # xargs 00:24:02.334 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.334 02:40:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:02.334 02:40:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.334 02:40:42 -- common/autotest_common.sh@10 -- # set +x 00:24:02.334 02:40:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@197 -- # kill 87809 00:24:02.334 02:40:42 -- host/mdns_discovery.sh@200 -- # wait 87809 00:24:02.334 [2024-11-21 02:40:42.961335] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:02.593 02:40:43 -- host/mdns_discovery.sh@201 -- # kill 87890 00:24:02.593 Got SIGTERM, quitting. 00:24:02.593 02:40:43 -- host/mdns_discovery.sh@202 -- # kill 87839 00:24:02.593 02:40:43 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:24:02.593 02:40:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:02.593 02:40:43 -- nvmf/common.sh@116 -- # sync 00:24:02.593 Got SIGTERM, quitting. 00:24:02.593 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:24:02.593 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:24:02.593 avahi-daemon 0.8 exiting. 
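The sequence above also exercises the duplicate-start guard in bdev_mdns_client.c: once a discovery service named "mdns" is browsing _nvme-disc._tcp, a second bdev_nvme_start_mdns_discovery that reuses either the name or the service type is rejected with JSON-RPC error -17 (File exists). A rough sketch of the three calls involved, matching the xtrace and error responses captured above:

  # First start succeeds and re-attaches mdns0_nvme0 / mdns1_nvme0
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp  -q nqn.2021-12.io.spdk:test
  # Rejected: name "mdns" is already running
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
  # Rejected: _nvme-disc._tcp is already being browsed
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc  -s _nvme-disc._tcp  -q nqn.2021-12.io.spdk:test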
00:24:02.593 02:40:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:02.593 02:40:43 -- nvmf/common.sh@119 -- # set +e 00:24:02.593 02:40:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:02.593 02:40:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:02.593 rmmod nvme_tcp 00:24:02.593 rmmod nvme_fabrics 00:24:02.593 rmmod nvme_keyring 00:24:02.593 02:40:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:02.593 02:40:43 -- nvmf/common.sh@123 -- # set -e 00:24:02.593 02:40:43 -- nvmf/common.sh@124 -- # return 0 00:24:02.593 02:40:43 -- nvmf/common.sh@477 -- # '[' -n 87760 ']' 00:24:02.593 02:40:43 -- nvmf/common.sh@478 -- # killprocess 87760 00:24:02.593 02:40:43 -- common/autotest_common.sh@936 -- # '[' -z 87760 ']' 00:24:02.593 02:40:43 -- common/autotest_common.sh@940 -- # kill -0 87760 00:24:02.593 02:40:43 -- common/autotest_common.sh@941 -- # uname 00:24:02.593 02:40:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:02.593 02:40:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87760 00:24:02.852 02:40:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:02.852 killing process with pid 87760 00:24:02.852 02:40:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:02.852 02:40:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87760' 00:24:02.852 02:40:43 -- common/autotest_common.sh@955 -- # kill 87760 00:24:02.852 02:40:43 -- common/autotest_common.sh@960 -- # wait 87760 00:24:02.852 02:40:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:02.852 02:40:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:02.852 02:40:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:02.852 02:40:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.852 02:40:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:02.852 02:40:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.852 02:40:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.852 02:40:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.852 02:40:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:03.109 00:24:03.110 real 0m20.619s 00:24:03.110 user 0m40.126s 00:24:03.110 sys 0m2.029s 00:24:03.110 02:40:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:03.110 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:24:03.110 ************************************ 00:24:03.110 END TEST nvmf_mdns_discovery 00:24:03.110 ************************************ 00:24:03.110 02:40:43 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:24:03.110 02:40:43 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:03.110 02:40:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:03.110 02:40:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:03.110 02:40:43 -- common/autotest_common.sh@10 -- # set +x 00:24:03.110 ************************************ 00:24:03.110 START TEST nvmf_multipath 00:24:03.110 ************************************ 00:24:03.110 02:40:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:03.110 * Looking for test storage... 
00:24:03.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:03.110 02:40:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:03.110 02:40:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:03.110 02:40:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:03.110 02:40:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:03.110 02:40:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:03.110 02:40:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:03.110 02:40:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:03.110 02:40:43 -- scripts/common.sh@335 -- # IFS=.-: 00:24:03.110 02:40:43 -- scripts/common.sh@335 -- # read -ra ver1 00:24:03.110 02:40:43 -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.110 02:40:43 -- scripts/common.sh@336 -- # read -ra ver2 00:24:03.110 02:40:43 -- scripts/common.sh@337 -- # local 'op=<' 00:24:03.110 02:40:43 -- scripts/common.sh@339 -- # ver1_l=2 00:24:03.110 02:40:43 -- scripts/common.sh@340 -- # ver2_l=1 00:24:03.110 02:40:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:03.110 02:40:43 -- scripts/common.sh@343 -- # case "$op" in 00:24:03.110 02:40:43 -- scripts/common.sh@344 -- # : 1 00:24:03.110 02:40:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:03.110 02:40:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.110 02:40:43 -- scripts/common.sh@364 -- # decimal 1 00:24:03.110 02:40:43 -- scripts/common.sh@352 -- # local d=1 00:24:03.110 02:40:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.110 02:40:43 -- scripts/common.sh@354 -- # echo 1 00:24:03.110 02:40:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:03.110 02:40:43 -- scripts/common.sh@365 -- # decimal 2 00:24:03.110 02:40:43 -- scripts/common.sh@352 -- # local d=2 00:24:03.110 02:40:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.110 02:40:43 -- scripts/common.sh@354 -- # echo 2 00:24:03.110 02:40:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:03.110 02:40:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:03.110 02:40:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:03.110 02:40:43 -- scripts/common.sh@367 -- # return 0 00:24:03.110 02:40:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.110 02:40:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:03.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.110 --rc genhtml_branch_coverage=1 00:24:03.110 --rc genhtml_function_coverage=1 00:24:03.110 --rc genhtml_legend=1 00:24:03.110 --rc geninfo_all_blocks=1 00:24:03.110 --rc geninfo_unexecuted_blocks=1 00:24:03.110 00:24:03.110 ' 00:24:03.110 02:40:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:03.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.110 --rc genhtml_branch_coverage=1 00:24:03.110 --rc genhtml_function_coverage=1 00:24:03.110 --rc genhtml_legend=1 00:24:03.110 --rc geninfo_all_blocks=1 00:24:03.110 --rc geninfo_unexecuted_blocks=1 00:24:03.110 00:24:03.110 ' 00:24:03.110 02:40:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:03.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.110 --rc genhtml_branch_coverage=1 00:24:03.110 --rc genhtml_function_coverage=1 00:24:03.110 --rc genhtml_legend=1 00:24:03.110 --rc geninfo_all_blocks=1 00:24:03.110 --rc geninfo_unexecuted_blocks=1 00:24:03.110 00:24:03.110 ' 00:24:03.110 
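(Editorial aside: the xtrace just above is scripts/common.sh deciding whether the installed lcov, version 1.15, is older than 2 before picking the coverage flag set. A minimal stand-alone sketch of that dotted-version comparison, simplified to '.'-separated numeric fields only; the in-tree helper also splits on '-' and ':'.)
# Sketch of the 'lt 1.15 2' check traced above: compare dotted versions field by field.
lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2"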
02:40:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:03.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.110 --rc genhtml_branch_coverage=1 00:24:03.110 --rc genhtml_function_coverage=1 00:24:03.110 --rc genhtml_legend=1 00:24:03.110 --rc geninfo_all_blocks=1 00:24:03.110 --rc geninfo_unexecuted_blocks=1 00:24:03.110 00:24:03.110 ' 00:24:03.110 02:40:43 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:03.110 02:40:43 -- nvmf/common.sh@7 -- # uname -s 00:24:03.110 02:40:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.110 02:40:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.110 02:40:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.110 02:40:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.110 02:40:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.110 02:40:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.110 02:40:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.110 02:40:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.110 02:40:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.110 02:40:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.110 02:40:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:24:03.110 02:40:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:24:03.110 02:40:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.110 02:40:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.110 02:40:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:03.110 02:40:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:03.110 02:40:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.110 02:40:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.110 02:40:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.110 02:40:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.110 02:40:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.110 02:40:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.110 02:40:43 -- paths/export.sh@5 -- # export PATH 00:24:03.110 02:40:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.110 02:40:43 -- nvmf/common.sh@46 -- # : 0 00:24:03.110 02:40:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:03.110 02:40:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:03.110 02:40:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:03.110 02:40:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.110 02:40:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.110 02:40:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:03.110 02:40:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:03.110 02:40:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:03.110 02:40:43 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:03.110 02:40:43 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:03.110 02:40:43 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:03.110 02:40:43 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:03.110 02:40:43 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.111 02:40:43 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:03.111 02:40:43 -- host/multipath.sh@30 -- # nvmftestinit 00:24:03.111 02:40:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:03.111 02:40:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.111 02:40:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:03.111 02:40:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:03.111 02:40:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:03.111 02:40:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.111 02:40:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.111 02:40:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.111 02:40:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:03.111 02:40:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:03.111 02:40:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:03.111 02:40:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:03.111 02:40:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:03.111 02:40:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:03.111 02:40:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.111 02:40:43 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.111 02:40:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:03.111 02:40:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:03.111 02:40:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:03.111 02:40:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:03.111 02:40:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:03.111 02:40:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.111 02:40:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:03.111 02:40:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:03.111 02:40:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:03.111 02:40:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:03.111 02:40:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:03.369 02:40:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:03.369 Cannot find device "nvmf_tgt_br" 00:24:03.369 02:40:43 -- nvmf/common.sh@154 -- # true 00:24:03.369 02:40:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:03.369 Cannot find device "nvmf_tgt_br2" 00:24:03.369 02:40:43 -- nvmf/common.sh@155 -- # true 00:24:03.369 02:40:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:03.369 02:40:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:03.369 Cannot find device "nvmf_tgt_br" 00:24:03.369 02:40:43 -- nvmf/common.sh@157 -- # true 00:24:03.369 02:40:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:03.369 Cannot find device "nvmf_tgt_br2" 00:24:03.369 02:40:43 -- nvmf/common.sh@158 -- # true 00:24:03.369 02:40:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:03.369 02:40:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:03.369 02:40:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:03.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:03.369 02:40:43 -- nvmf/common.sh@161 -- # true 00:24:03.369 02:40:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:03.369 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:03.369 02:40:43 -- nvmf/common.sh@162 -- # true 00:24:03.369 02:40:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:03.369 02:40:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:03.369 02:40:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:03.369 02:40:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:03.369 02:40:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:03.369 02:40:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:03.369 02:40:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:03.369 02:40:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:03.369 02:40:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:03.369 02:40:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:03.369 02:40:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:03.369 02:40:43 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:24:03.369 02:40:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:03.369 02:40:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:03.369 02:40:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:03.369 02:40:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:03.369 02:40:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:03.369 02:40:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:03.369 02:40:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:03.628 02:40:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:03.628 02:40:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:03.628 02:40:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:03.628 02:40:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:03.628 02:40:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:03.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:24:03.628 00:24:03.628 --- 10.0.0.2 ping statistics --- 00:24:03.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.628 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:24:03.628 02:40:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:03.628 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:03.628 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:24:03.628 00:24:03.628 --- 10.0.0.3 ping statistics --- 00:24:03.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.628 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:03.628 02:40:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:03.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:03.628 00:24:03.628 --- 10.0.0.1 ping statistics --- 00:24:03.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.628 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:03.628 02:40:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.628 02:40:44 -- nvmf/common.sh@421 -- # return 0 00:24:03.628 02:40:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:03.628 02:40:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.628 02:40:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:03.628 02:40:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:03.628 02:40:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.628 02:40:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:03.628 02:40:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:03.628 02:40:44 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:03.628 02:40:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:03.628 02:40:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:03.628 02:40:44 -- common/autotest_common.sh@10 -- # set +x 00:24:03.628 02:40:44 -- nvmf/common.sh@469 -- # nvmfpid=88412 00:24:03.628 02:40:44 -- nvmf/common.sh@470 -- # waitforlisten 88412 00:24:03.628 02:40:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:03.628 02:40:44 -- common/autotest_common.sh@829 -- # '[' -z 88412 ']' 00:24:03.628 02:40:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.628 02:40:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.628 02:40:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.628 02:40:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.628 02:40:44 -- common/autotest_common.sh@10 -- # set +x 00:24:03.628 [2024-11-21 02:40:44.146255] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:03.628 [2024-11-21 02:40:44.146340] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.887 [2024-11-21 02:40:44.284199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:03.887 [2024-11-21 02:40:44.371249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:03.887 [2024-11-21 02:40:44.371393] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.887 [2024-11-21 02:40:44.371406] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.887 [2024-11-21 02:40:44.371414] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
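(Editorial aside: nvmf_veth_init, traced just above, builds the whole test network from scratch. Condensed to its essentials it is roughly the following, with interface and namespace names exactly as in the log and error handling omitted.)
# Condensed sketch of the veth/namespace topology set up above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk bash -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # initiator -> target reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator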
00:24:03.887 [2024-11-21 02:40:44.371726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.887 [2024-11-21 02:40:44.371787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.824 02:40:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.824 02:40:45 -- common/autotest_common.sh@862 -- # return 0 00:24:04.824 02:40:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:04.824 02:40:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.824 02:40:45 -- common/autotest_common.sh@10 -- # set +x 00:24:04.824 02:40:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.824 02:40:45 -- host/multipath.sh@33 -- # nvmfapp_pid=88412 00:24:04.824 02:40:45 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:04.824 [2024-11-21 02:40:45.459335] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.083 02:40:45 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:05.083 Malloc0 00:24:05.083 02:40:45 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:05.342 02:40:45 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.601 02:40:46 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.861 [2024-11-21 02:40:46.304484] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.861 02:40:46 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:06.120 [2024-11-21 02:40:46.508577] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:06.120 02:40:46 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:06.120 02:40:46 -- host/multipath.sh@44 -- # bdevperf_pid=88506 00:24:06.120 02:40:46 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.120 02:40:46 -- host/multipath.sh@47 -- # waitforlisten 88506 /var/tmp/bdevperf.sock 00:24:06.120 02:40:46 -- common/autotest_common.sh@829 -- # '[' -z 88506 ']' 00:24:06.120 02:40:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.120 02:40:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.120 02:40:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
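(Editorial aside: with the namespaces up and nvmf_tgt running, the target-side configuration traced above reduces to a handful of rpc.py calls against the default /var/tmp/spdk.sock. A condensed sketch mirroring the calls visible in the log.)
# Condensed sketch of the target-side setup: transport, backing bdev, subsystem, two listeners.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421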
00:24:06.120 02:40:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.120 02:40:46 -- common/autotest_common.sh@10 -- # set +x 00:24:07.056 02:40:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.056 02:40:47 -- common/autotest_common.sh@862 -- # return 0 00:24:07.056 02:40:47 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:07.315 02:40:47 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:07.884 Nvme0n1 00:24:07.884 02:40:48 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:08.142 Nvme0n1 00:24:08.142 02:40:48 -- host/multipath.sh@78 -- # sleep 1 00:24:08.142 02:40:48 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:09.091 02:40:49 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:09.091 02:40:49 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.350 02:40:49 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:09.608 02:40:50 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:09.608 02:40:50 -- host/multipath.sh@65 -- # dtrace_pid=88599 00:24:09.608 02:40:50 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88412 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:09.608 02:40:50 -- host/multipath.sh@66 -- # sleep 6 00:24:16.175 02:40:56 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:16.175 02:40:56 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:16.175 02:40:56 -- host/multipath.sh@67 -- # active_port=4421 00:24:16.175 02:40:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:16.175 Attaching 4 probes... 
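(Editorial aside: on the host side, bdevperf is pointed at both listeners of the same subsystem with multipath enabled, and the ANA state of each listener is then flipped to steer I/O. A condensed sketch of the calls traced above, assuming bdevperf is already listening on /var/tmp/bdevperf.sock; the @path hit counts printed just below confirm that all I/O lands on the optimized 4421 path.)
# Condensed sketch: two paths to the same subsystem, then ANA steering.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
# Steer I/O: 4420 non-optimized, 4421 optimized (the set_ANA_state helper in the test).
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized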
00:24:16.175 @path[10.0.0.2, 4421]: 21972 00:24:16.175 @path[10.0.0.2, 4421]: 22518 00:24:16.175 @path[10.0.0.2, 4421]: 22377 00:24:16.175 @path[10.0.0.2, 4421]: 22445 00:24:16.175 @path[10.0.0.2, 4421]: 22472 00:24:16.175 02:40:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:16.175 02:40:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:16.175 02:40:56 -- host/multipath.sh@69 -- # sed -n 1p 00:24:16.175 02:40:56 -- host/multipath.sh@69 -- # port=4421 00:24:16.175 02:40:56 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:16.175 02:40:56 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:16.175 02:40:56 -- host/multipath.sh@72 -- # kill 88599 00:24:16.175 02:40:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:16.175 02:40:56 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:16.175 02:40:56 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:16.175 02:40:56 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:16.175 02:40:56 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:16.175 02:40:56 -- host/multipath.sh@65 -- # dtrace_pid=88731 00:24:16.175 02:40:56 -- host/multipath.sh@66 -- # sleep 6 00:24:16.175 02:40:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88412 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:22.740 02:41:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:22.740 02:41:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:22.740 02:41:03 -- host/multipath.sh@67 -- # active_port=4420 00:24:22.740 02:41:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:22.740 Attaching 4 probes... 
00:24:22.740 @path[10.0.0.2, 4420]: 22610 00:24:22.740 @path[10.0.0.2, 4420]: 22686 00:24:22.740 @path[10.0.0.2, 4420]: 22444 00:24:22.740 @path[10.0.0.2, 4420]: 22425 00:24:22.740 @path[10.0.0.2, 4420]: 22462 00:24:22.740 02:41:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:22.740 02:41:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:22.740 02:41:03 -- host/multipath.sh@69 -- # sed -n 1p 00:24:22.740 02:41:03 -- host/multipath.sh@69 -- # port=4420 00:24:22.740 02:41:03 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:22.740 02:41:03 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:22.740 02:41:03 -- host/multipath.sh@72 -- # kill 88731 00:24:22.740 02:41:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:22.740 02:41:03 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:22.740 02:41:03 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:22.740 02:41:03 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:23.308 02:41:03 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:23.308 02:41:03 -- host/multipath.sh@65 -- # dtrace_pid=88866 00:24:23.308 02:41:03 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88412 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:23.308 02:41:03 -- host/multipath.sh@66 -- # sleep 6 00:24:29.873 02:41:09 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:29.873 02:41:09 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:29.873 02:41:09 -- host/multipath.sh@67 -- # active_port=4421 00:24:29.873 02:41:09 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:29.873 Attaching 4 probes... 
00:24:29.873 @path[10.0.0.2, 4421]: 16623 00:24:29.873 @path[10.0.0.2, 4421]: 20930 00:24:29.873 @path[10.0.0.2, 4421]: 21029 00:24:29.873 @path[10.0.0.2, 4421]: 20814 00:24:29.873 @path[10.0.0.2, 4421]: 20773 00:24:29.873 02:41:09 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:29.873 02:41:09 -- host/multipath.sh@69 -- # sed -n 1p 00:24:29.873 02:41:09 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:29.873 02:41:09 -- host/multipath.sh@69 -- # port=4421 00:24:29.873 02:41:09 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:29.873 02:41:09 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:29.873 02:41:09 -- host/multipath.sh@72 -- # kill 88866 00:24:29.873 02:41:09 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:29.873 02:41:09 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:29.873 02:41:09 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:29.874 02:41:10 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:29.874 02:41:10 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:29.874 02:41:10 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88412 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:29.874 02:41:10 -- host/multipath.sh@65 -- # dtrace_pid=88998 00:24:29.874 02:41:10 -- host/multipath.sh@66 -- # sleep 6 00:24:36.440 02:41:16 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:36.440 02:41:16 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:36.440 02:41:16 -- host/multipath.sh@67 -- # active_port= 00:24:36.440 02:41:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:36.440 Attaching 4 probes... 
00:24:36.440 00:24:36.440 00:24:36.440 00:24:36.440 00:24:36.441 00:24:36.441 02:41:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:36.441 02:41:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:36.441 02:41:16 -- host/multipath.sh@69 -- # sed -n 1p 00:24:36.441 02:41:16 -- host/multipath.sh@69 -- # port= 00:24:36.441 02:41:16 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:36.441 02:41:16 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:36.441 02:41:16 -- host/multipath.sh@72 -- # kill 88998 00:24:36.441 02:41:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:36.441 02:41:16 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:36.441 02:41:16 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:36.441 02:41:17 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:36.699 02:41:17 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:36.699 02:41:17 -- host/multipath.sh@65 -- # dtrace_pid=89128 00:24:36.699 02:41:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88412 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:36.699 02:41:17 -- host/multipath.sh@66 -- # sleep 6 00:24:43.266 02:41:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:43.266 02:41:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:43.266 02:41:23 -- host/multipath.sh@67 -- # active_port=4421 00:24:43.266 02:41:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:43.266 Attaching 4 probes... 
00:24:43.266 @path[10.0.0.2, 4421]: 20420 00:24:43.266 @path[10.0.0.2, 4421]: 20545 00:24:43.266 @path[10.0.0.2, 4421]: 20699 00:24:43.266 @path[10.0.0.2, 4421]: 21006 00:24:43.266 @path[10.0.0.2, 4421]: 20663 00:24:43.266 02:41:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:43.266 02:41:23 -- host/multipath.sh@69 -- # sed -n 1p 00:24:43.266 02:41:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:43.266 02:41:23 -- host/multipath.sh@69 -- # port=4421 00:24:43.266 02:41:23 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:43.266 02:41:23 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:43.266 02:41:23 -- host/multipath.sh@72 -- # kill 89128 00:24:43.266 02:41:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:43.266 02:41:23 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:43.266 [2024-11-21 02:41:23.858387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858477] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858497] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858522] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858610] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858621] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858645] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.266 [2024-11-21 02:41:23.858657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858911] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.858994] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 [2024-11-21 02:41:23.859009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1366800 is same with the state(5) to be set 00:24:43.267 02:41:23 -- host/multipath.sh@101 -- # sleep 1 00:24:44.645 02:41:24 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:44.645 02:41:24 -- host/multipath.sh@65 -- # dtrace_pid=89264 00:24:44.645 02:41:24 -- host/multipath.sh@66 -- # sleep 6 00:24:44.645 02:41:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88412 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:51.243 02:41:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:51.243 02:41:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:51.243 02:41:31 -- host/multipath.sh@67 -- # active_port=4420 00:24:51.243 02:41:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:51.243 Attaching 4 probes... 00:24:51.243 @path[10.0.0.2, 4420]: 20934 00:24:51.243 @path[10.0.0.2, 4420]: 21304 00:24:51.243 @path[10.0.0.2, 4420]: 21267 00:24:51.243 @path[10.0.0.2, 4420]: 21487 00:24:51.243 @path[10.0.0.2, 4420]: 21468 00:24:51.243 02:41:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:51.243 02:41:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:51.243 02:41:31 -- host/multipath.sh@69 -- # sed -n 1p 00:24:51.243 02:41:31 -- host/multipath.sh@69 -- # port=4420 00:24:51.243 02:41:31 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:51.243 02:41:31 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:51.243 02:41:31 -- host/multipath.sh@72 -- # kill 89264 00:24:51.243 02:41:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:51.243 02:41:31 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:51.243 [2024-11-21 02:41:31.418261] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:51.243 02:41:31 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:51.243 02:41:31 -- host/multipath.sh@111 -- # sleep 6 00:24:57.826 02:41:37 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:57.826 02:41:37 -- host/multipath.sh@65 -- # dtrace_pid=89451 00:24:57.826 02:41:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88412 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:57.826 02:41:37 -- host/multipath.sh@66 -- # sleep 6 00:25:03.099 02:41:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:03.099 02:41:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:03.667 02:41:44 -- host/multipath.sh@67 -- # active_port=4421 00:25:03.667 02:41:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:03.667 Attaching 4 probes... 
00:25:03.667 @path[10.0.0.2, 4421]: 19884 00:25:03.667 @path[10.0.0.2, 4421]: 19981 00:25:03.667 @path[10.0.0.2, 4421]: 20225 00:25:03.667 @path[10.0.0.2, 4421]: 20255 00:25:03.667 @path[10.0.0.2, 4421]: 20172 00:25:03.667 02:41:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:03.667 02:41:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:03.667 02:41:44 -- host/multipath.sh@69 -- # sed -n 1p 00:25:03.667 02:41:44 -- host/multipath.sh@69 -- # port=4421 00:25:03.667 02:41:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:03.667 02:41:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:03.667 02:41:44 -- host/multipath.sh@72 -- # kill 89451 00:25:03.667 02:41:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:03.667 02:41:44 -- host/multipath.sh@114 -- # killprocess 88506 00:25:03.667 02:41:44 -- common/autotest_common.sh@936 -- # '[' -z 88506 ']' 00:25:03.667 02:41:44 -- common/autotest_common.sh@940 -- # kill -0 88506 00:25:03.667 02:41:44 -- common/autotest_common.sh@941 -- # uname 00:25:03.667 02:41:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:03.667 02:41:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88506 00:25:03.667 02:41:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:03.667 02:41:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:03.667 killing process with pid 88506 00:25:03.667 02:41:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88506' 00:25:03.667 02:41:44 -- common/autotest_common.sh@955 -- # kill 88506 00:25:03.667 02:41:44 -- common/autotest_common.sh@960 -- # wait 88506 00:25:03.667 Connection closed with partial response: 00:25:03.667 00:25:03.667 00:25:03.936 02:41:44 -- host/multipath.sh@116 -- # wait 88506 00:25:03.936 02:41:44 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:03.936 [2024-11-21 02:40:46.572183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:03.936 [2024-11-21 02:40:46.572276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88506 ] 00:25:03.936 [2024-11-21 02:40:46.705201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.936 [2024-11-21 02:40:46.811122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.936 Running I/O for 90 seconds... 
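(Editorial aside: every confirm_io_on_port cycle above follows the same pattern: bpftrace.sh 88412 nvmf_path.bt collects per-path I/O counters for six seconds, then the listener with the expected ANA state and the first @path line of trace.txt are compared against the expected port. A minimal sketch of that check, reconstructed from the jq/awk/cut/sed pipeline visible in this trace; the real helper also starts and stops the probe itself.)
# Sketch of confirm_io_on_port: does I/O flow on the listener whose ANA state matches $1,
# and is that listener on port $2?
confirm_io_on_port() {
    local state=$1 port=$2 rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local active_port observed_port
    active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r ".[] | select (.ana_states[0].ana_state==\"$state\") | .address.trsvcid")
    observed_port=$(awk '$1=="@path[10.0.0.2," {print $2}' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt | cut -d ']' -f1 | sed -n 1p)
    [[ $active_port == "$port" && $observed_port == "$port" ]]
}
confirm_io_on_port optimized 4421 && echo "I/O is on the optimized 4421 path"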
00:25:03.936 [2024-11-21 02:40:56.784495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.936 [2024-11-21 02:40:56.784548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:03.936 [2024-11-21 02:40:56.784595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.936 [2024-11-21 02:40:56.784615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:03.936 [2024-11-21 02:40:56.784635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.936 [2024-11-21 02:40:56.784649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:03.936 [2024-11-21 02:40:56.784667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.784680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.784697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.784710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.784728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.784771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.784795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.784812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.784832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.784847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.784868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.784883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.784915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.784930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.784952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.784986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.785011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.785027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.785048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.785063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.785939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.785975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.786020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.786056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.786116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.786152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.786187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.786224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.786261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.786296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.786364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.786453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.786485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.937 [2024-11-21 02:40:56.786802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.786845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.786901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.786937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:03.937 [2024-11-21 02:40:56.786972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.786993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.787008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.787028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.787043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.787078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.787093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.787128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.787173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.787190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.787203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.787232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.787246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.787267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.787281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.787300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.937 [2024-11-21 02:40:56.787313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:03.937 [2024-11-21 02:40:56.787331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.787727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.787916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.787952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.787973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.787988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.788059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:25:03.938 [2024-11-21 02:40:56.788080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.788102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.788213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.788244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.788275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.788595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.938 [2024-11-21 02:40:56.788626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.788663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:03.938 [2024-11-21 02:40:56.788681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.938 [2024-11-21 02:40:56.788695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.788713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.788727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.788760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.788791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.788822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:40:56.788840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.788864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:40:56.788879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.788900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.788915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.788936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:40:56.788951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.788972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:40:56.788987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:40:56.789069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:03.939 [2024-11-21 02:40:56.789228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:40:56.789521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:40:56.789583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:40:56.789632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:40:56.789646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:41:03.345417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:41:03.345448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:41:03.345479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.939 [2024-11-21 02:41:03.345686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.939 [2024-11-21 02:41:03.345716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:03.939 [2024-11-21 02:41:03.345733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.345762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.345815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.345831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.345853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.345868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:25:03.940 [2024-11-21 02:41:03.345888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.345903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.345925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.345940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.345962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.345991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.346178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.346250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.346407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.346440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.346560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.346591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.346621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.346639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.346653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.347791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.347822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.347867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.347888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.347920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.347936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.347962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.347977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:03.940 [2024-11-21 02:41:03.348286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.940 [2024-11-21 02:41:03.348463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:03.940 [2024-11-21 02:41:03.348485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.940 [2024-11-21 02:41:03.348499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.348569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.348605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.348983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.348998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.349040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.349081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.349268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.349308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.349382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.349419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
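00:25:03.941 The repeated *NOTICE* pairs above come from SPDK's nvme_qpair.c: 243:nvme_io_qpair_print_command, which prints each queued I/O (READ or WRITE with its sqid, cid, nsid, lba and length), and 474:spdk_nvme_print_completion, which prints the matching completion — here failed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), a path-related status (SCT 0x3, SC 0x02) reported while the namespace's ANA state on this path is inaccessible. A minimal post-processing sketch for tallying these notices from the console output (the file name console.log and the field positions are assumptions, not something this job produces):
00:25:03.941   # Count completions failed with the ANA-inaccessible status, grouped by queue id.
00:25:03.941   grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' console.log | sort | uniq -c
00:25:03.941   # Split the affected commands by opcode (READ vs WRITE) as printed by nvme_io_qpair_print_command.
00:25:03.941   grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' console.log | awk '{print $2}' | sort | uniq -c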
00:25:03.941 [2024-11-21 02:41:03.349555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.349936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.349978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.349994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.350024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.350040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.350081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.941 [2024-11-21 02:41:03.350100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:03.941 [2024-11-21 02:41:03.350130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.941 [2024-11-21 02:41:03.350145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:03.350205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:03.350248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:03.350306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:03.350363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:03.350404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:03.350444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:03.350500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:03.350541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:03.350594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:03.350634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:03.350660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:03.350674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.430655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:10.430708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.430825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:10.430851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.430876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.430892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.430913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:10.430928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.430949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.430964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.430986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:10.431001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:10.431036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:03.942 [2024-11-21 02:41:10.431203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.942 [2024-11-21 02:41:10.431946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.431969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.942 [2024-11-21 02:41:10.431984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:03.942 [2024-11-21 02:41:10.432007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.432060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.432439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.432472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.432587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.432659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.432731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:03.943 [2024-11-21 02:41:10.432769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.432824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.432920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.432982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.432998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.433179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.433285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.433322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.433359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.433395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.943 [2024-11-21 02:41:10.433504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.433966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.433982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.434007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.943 [2024-11-21 02:41:10.434023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:03.943 [2024-11-21 02:41:10.434049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:86 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.944 [2024-11-21 02:41:10.434680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.944 [2024-11-21 02:41:10.434850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.434876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.434893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.944 [2024-11-21 02:41:10.435165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.944 [2024-11-21 02:41:10.435225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.944 [2024-11-21 02:41:10.435264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.944 [2024-11-21 02:41:10.435465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.944 [2024-11-21 02:41:10.435543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.944 [2024-11-21 02:41:10.435642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:25:03.944 [2024-11-21 02:41:10.435705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.435971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.435987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.944 [2024-11-21 02:41:10.436015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.944 [2024-11-21 02:41:10.436031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.945 [2024-11-21 02:41:10.436074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:10.436118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.945 [2024-11-21 02:41:10.436215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:10.436261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:10.436300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.945 [2024-11-21 02:41:10.436338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.945 [2024-11-21 02:41:10.436377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.945 [2024-11-21 02:41:10.436417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:10.436456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.945 [2024-11-21 02:41:10.436495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.945 [2024-11-21 02:41:10.436534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:10.436559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.945 [2024-11-21 02:41:10.436573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.859629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.945 [2024-11-21 02:41:23.859674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.859691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.945 [2024-11-21 02:41:23.859704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.859716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.945 [2024-11-21 02:41:23.859758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.859790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.945 [2024-11-21 02:41:23.859803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.859816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c9790 is same with the state(5) to be set 00:25:03.945 [2024-11-21 02:41:23.859908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.859932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.859957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.859972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.859987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:03.945 [2024-11-21 02:41:23.860428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.945 [2024-11-21 02:41:23.860541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.945 [2024-11-21 02:41:23.860554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860662] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.946 [2024-11-21 02:41:23.860685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.946 [2024-11-21 02:41:23.860709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.946 [2024-11-21 02:41:23.860732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.946 [2024-11-21 02:41:23.860788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.946 [2024-11-21 02:41:23.860816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.946 [2024-11-21 02:41:23.860887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.946 [2024-11-21 02:41:23.860964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.946 [2024-11-21 02:41:23.860977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.948 [2024-11-21 02:41:23.863575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12255b0 is same with the state(5) to be set 00:25:03.948 [2024-11-21 02:41:23.863601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:03.948 [2024-11-21 02:41:23.863610] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:03.948 [2024-11-21 02:41:23.863619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:8 PRP1 0x0 PRP2 0x0 00:25:03.948 [2024-11-21 02:41:23.863631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.948 [2024-11-21 02:41:23.863686] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12255b0 was disconnected and freed. reset controller. 00:25:03.948 [2024-11-21 02:41:23.865002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.948 [2024-11-21 02:41:23.865040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9790 (9): Bad file descriptor 00:25:03.948 [2024-11-21 02:41:23.865174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-11-21 02:41:23.865225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.948 [2024-11-21 02:41:23.865244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c9790 with addr=10.0.0.2, port=4421 00:25:03.948 [2024-11-21 02:41:23.865258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c9790 is same with the state(5) to be set 00:25:03.948 [2024-11-21 02:41:23.865280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9790 (9): Bad file descriptor 00:25:03.948 [2024-11-21 02:41:23.865301] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.948 [2024-11-21 02:41:23.865313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.948 [2024-11-21 02:41:23.865326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.948 [2024-11-21 02:41:23.865348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:03.948 [2024-11-21 02:41:23.865360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.948 [2024-11-21 02:41:33.910578] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
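The block above is the multipath failover working as intended: every command still queued on qpair 0x12255b0 is completed with ABORTED - SQ DELETION once the target tears down the submission queue, bdev_nvme frees the qpair and resets the controller, the retry against 10.0.0.2 port 4421 first fails with errno 111, and roughly ten seconds later the reset succeeds. As a hedged illustration only (this is not the test script itself), a path flap like that can be driven from the target side with the same rpc.py verbs that appear elsewhere in this log; the NQN and address below are copied from this run, and the ten-second pause is just an assumed stand-in for the harness's own timing.

# sketch: drop and restore the secondary listener so the initiator must abort queued I/O and reconnect
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 10    # assumed interval; bdev_nvme keeps retrying until the listener returns
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421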
00:25:03.948 Received shutdown signal, test time was about 55.347643 seconds 00:25:03.948 00:25:03.948 Latency(us) 00:25:03.948 [2024-11-21T02:41:44.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.948 [2024-11-21T02:41:44.595Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:03.949 Verification LBA range: start 0x0 length 0x4000 00:25:03.949 Nvme0n1 : 55.35 12162.97 47.51 0.00 0.00 10507.86 1139.43 7015926.69 00:25:03.949 [2024-11-21T02:41:44.596Z] =================================================================================================================== 00:25:03.949 [2024-11-21T02:41:44.596Z] Total : 12162.97 47.51 0.00 0.00 10507.86 1139.43 7015926.69 00:25:03.949 02:41:44 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:04.208 02:41:44 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:25:04.208 02:41:44 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:04.208 02:41:44 -- host/multipath.sh@125 -- # nvmftestfini 00:25:04.208 02:41:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:04.208 02:41:44 -- nvmf/common.sh@116 -- # sync 00:25:04.208 02:41:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:04.208 02:41:44 -- nvmf/common.sh@119 -- # set +e 00:25:04.208 02:41:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:04.208 02:41:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:04.208 rmmod nvme_tcp 00:25:04.208 rmmod nvme_fabrics 00:25:04.208 rmmod nvme_keyring 00:25:04.208 02:41:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:04.208 02:41:44 -- nvmf/common.sh@123 -- # set -e 00:25:04.208 02:41:44 -- nvmf/common.sh@124 -- # return 0 00:25:04.208 02:41:44 -- nvmf/common.sh@477 -- # '[' -n 88412 ']' 00:25:04.208 02:41:44 -- nvmf/common.sh@478 -- # killprocess 88412 00:25:04.208 02:41:44 -- common/autotest_common.sh@936 -- # '[' -z 88412 ']' 00:25:04.208 02:41:44 -- common/autotest_common.sh@940 -- # kill -0 88412 00:25:04.208 02:41:44 -- common/autotest_common.sh@941 -- # uname 00:25:04.208 02:41:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:04.208 02:41:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88412 00:25:04.208 02:41:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:04.208 02:41:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:04.208 killing process with pid 88412 00:25:04.208 02:41:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88412' 00:25:04.208 02:41:44 -- common/autotest_common.sh@955 -- # kill 88412 00:25:04.208 02:41:44 -- common/autotest_common.sh@960 -- # wait 88412 00:25:04.468 02:41:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:04.468 02:41:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:04.468 02:41:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:04.468 02:41:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:04.468 02:41:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:04.468 02:41:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.468 02:41:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.468 02:41:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.468 02:41:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:04.468 00:25:04.468 real 1m1.514s 00:25:04.468 user 2m52.913s 00:25:04.468 
sys 0m14.198s 00:25:04.468 02:41:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:04.468 02:41:45 -- common/autotest_common.sh@10 -- # set +x 00:25:04.468 ************************************ 00:25:04.468 END TEST nvmf_multipath 00:25:04.468 ************************************ 00:25:04.468 02:41:45 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:04.468 02:41:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:04.468 02:41:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:04.468 02:41:45 -- common/autotest_common.sh@10 -- # set +x 00:25:04.727 ************************************ 00:25:04.727 START TEST nvmf_timeout 00:25:04.727 ************************************ 00:25:04.727 02:41:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:04.727 * Looking for test storage... 00:25:04.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:04.727 02:41:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:04.727 02:41:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:04.727 02:41:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:04.727 02:41:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:04.727 02:41:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:04.727 02:41:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:04.727 02:41:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:04.727 02:41:45 -- scripts/common.sh@335 -- # IFS=.-: 00:25:04.727 02:41:45 -- scripts/common.sh@335 -- # read -ra ver1 00:25:04.727 02:41:45 -- scripts/common.sh@336 -- # IFS=.-: 00:25:04.727 02:41:45 -- scripts/common.sh@336 -- # read -ra ver2 00:25:04.727 02:41:45 -- scripts/common.sh@337 -- # local 'op=<' 00:25:04.727 02:41:45 -- scripts/common.sh@339 -- # ver1_l=2 00:25:04.727 02:41:45 -- scripts/common.sh@340 -- # ver2_l=1 00:25:04.727 02:41:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:04.727 02:41:45 -- scripts/common.sh@343 -- # case "$op" in 00:25:04.727 02:41:45 -- scripts/common.sh@344 -- # : 1 00:25:04.727 02:41:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:04.727 02:41:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:04.727 02:41:45 -- scripts/common.sh@364 -- # decimal 1 00:25:04.727 02:41:45 -- scripts/common.sh@352 -- # local d=1 00:25:04.727 02:41:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:04.727 02:41:45 -- scripts/common.sh@354 -- # echo 1 00:25:04.727 02:41:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:04.727 02:41:45 -- scripts/common.sh@365 -- # decimal 2 00:25:04.727 02:41:45 -- scripts/common.sh@352 -- # local d=2 00:25:04.727 02:41:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:04.727 02:41:45 -- scripts/common.sh@354 -- # echo 2 00:25:04.727 02:41:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:04.727 02:41:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:04.727 02:41:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:04.727 02:41:45 -- scripts/common.sh@367 -- # return 0 00:25:04.727 02:41:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:04.727 02:41:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:04.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.727 --rc genhtml_branch_coverage=1 00:25:04.727 --rc genhtml_function_coverage=1 00:25:04.727 --rc genhtml_legend=1 00:25:04.727 --rc geninfo_all_blocks=1 00:25:04.727 --rc geninfo_unexecuted_blocks=1 00:25:04.727 00:25:04.727 ' 00:25:04.727 02:41:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:04.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.727 --rc genhtml_branch_coverage=1 00:25:04.727 --rc genhtml_function_coverage=1 00:25:04.727 --rc genhtml_legend=1 00:25:04.727 --rc geninfo_all_blocks=1 00:25:04.727 --rc geninfo_unexecuted_blocks=1 00:25:04.727 00:25:04.727 ' 00:25:04.727 02:41:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:04.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.727 --rc genhtml_branch_coverage=1 00:25:04.727 --rc genhtml_function_coverage=1 00:25:04.727 --rc genhtml_legend=1 00:25:04.727 --rc geninfo_all_blocks=1 00:25:04.727 --rc geninfo_unexecuted_blocks=1 00:25:04.727 00:25:04.727 ' 00:25:04.727 02:41:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:04.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.727 --rc genhtml_branch_coverage=1 00:25:04.727 --rc genhtml_function_coverage=1 00:25:04.727 --rc genhtml_legend=1 00:25:04.727 --rc geninfo_all_blocks=1 00:25:04.727 --rc geninfo_unexecuted_blocks=1 00:25:04.727 00:25:04.727 ' 00:25:04.727 02:41:45 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:04.727 02:41:45 -- nvmf/common.sh@7 -- # uname -s 00:25:04.727 02:41:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:04.727 02:41:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.727 02:41:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.727 02:41:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.727 02:41:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.727 02:41:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.727 02:41:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.727 02:41:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.728 02:41:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.728 02:41:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.728 02:41:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:25:04.728 
02:41:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:25:04.728 02:41:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.728 02:41:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.728 02:41:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:04.728 02:41:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:04.728 02:41:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.728 02:41:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.728 02:41:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.728 02:41:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.728 02:41:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.728 02:41:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.728 02:41:45 -- paths/export.sh@5 -- # export PATH 00:25:04.728 02:41:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.728 02:41:45 -- nvmf/common.sh@46 -- # : 0 00:25:04.728 02:41:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:04.728 02:41:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:04.728 02:41:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:04.728 02:41:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:04.728 02:41:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.728 02:41:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:25:04.728 02:41:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:04.728 02:41:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:04.728 02:41:45 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:04.728 02:41:45 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:04.728 02:41:45 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:04.728 02:41:45 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:04.728 02:41:45 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:04.728 02:41:45 -- host/timeout.sh@19 -- # nvmftestinit 00:25:04.728 02:41:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:04.728 02:41:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.728 02:41:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:04.728 02:41:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:04.728 02:41:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:04.728 02:41:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.728 02:41:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.728 02:41:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.728 02:41:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:04.728 02:41:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:04.728 02:41:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:04.728 02:41:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:04.728 02:41:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:04.728 02:41:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:04.728 02:41:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.728 02:41:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.728 02:41:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:04.728 02:41:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:04.728 02:41:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:04.728 02:41:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:04.728 02:41:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:04.728 02:41:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.728 02:41:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:04.728 02:41:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:04.728 02:41:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:04.728 02:41:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:04.728 02:41:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:04.728 02:41:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:04.728 Cannot find device "nvmf_tgt_br" 00:25:04.728 02:41:45 -- nvmf/common.sh@154 -- # true 00:25:04.728 02:41:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:04.987 Cannot find device "nvmf_tgt_br2" 00:25:04.987 02:41:45 -- nvmf/common.sh@155 -- # true 00:25:04.987 02:41:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:04.987 02:41:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:04.987 Cannot find device "nvmf_tgt_br" 00:25:04.987 02:41:45 -- nvmf/common.sh@157 -- # true 00:25:04.987 02:41:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:04.987 Cannot find device "nvmf_tgt_br2" 00:25:04.987 02:41:45 -- nvmf/common.sh@158 -- # true 00:25:04.987 02:41:45 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:04.987 02:41:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:04.987 02:41:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:04.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:04.987 02:41:45 -- nvmf/common.sh@161 -- # true 00:25:04.987 02:41:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:04.987 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:04.987 02:41:45 -- nvmf/common.sh@162 -- # true 00:25:04.987 02:41:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:04.987 02:41:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:04.987 02:41:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:04.987 02:41:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:04.987 02:41:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:04.987 02:41:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:04.987 02:41:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:04.987 02:41:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:04.987 02:41:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:04.987 02:41:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:04.987 02:41:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:04.987 02:41:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:04.987 02:41:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:04.987 02:41:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:04.987 02:41:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:04.987 02:41:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:04.987 02:41:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:04.987 02:41:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:04.987 02:41:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:04.987 02:41:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:04.987 02:41:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:04.987 02:41:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:04.987 02:41:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:05.246 02:41:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:05.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:25:05.246 00:25:05.246 --- 10.0.0.2 ping statistics --- 00:25:05.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.246 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:25:05.246 02:41:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:05.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:05.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:25:05.246 00:25:05.246 --- 10.0.0.3 ping statistics --- 00:25:05.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.246 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:05.246 02:41:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:05.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:05.246 00:25:05.246 --- 10.0.0.1 ping statistics --- 00:25:05.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.246 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:05.246 02:41:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.246 02:41:45 -- nvmf/common.sh@421 -- # return 0 00:25:05.246 02:41:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:05.246 02:41:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.246 02:41:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:05.246 02:41:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:05.246 02:41:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.246 02:41:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:05.246 02:41:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:05.246 02:41:45 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:05.246 02:41:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:05.246 02:41:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:05.246 02:41:45 -- common/autotest_common.sh@10 -- # set +x 00:25:05.246 02:41:45 -- nvmf/common.sh@469 -- # nvmfpid=89783 00:25:05.246 02:41:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:05.246 02:41:45 -- nvmf/common.sh@470 -- # waitforlisten 89783 00:25:05.246 02:41:45 -- common/autotest_common.sh@829 -- # '[' -z 89783 ']' 00:25:05.246 02:41:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.246 02:41:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:05.246 02:41:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.246 02:41:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:05.246 02:41:45 -- common/autotest_common.sh@10 -- # set +x 00:25:05.246 [2024-11-21 02:41:45.729523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:05.246 [2024-11-21 02:41:45.729612] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.246 [2024-11-21 02:41:45.866270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:05.506 [2024-11-21 02:41:45.943182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:05.506 [2024-11-21 02:41:45.943587] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.506 [2024-11-21 02:41:45.943795] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:05.506 [2024-11-21 02:41:45.943939] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.506 [2024-11-21 02:41:45.944207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.506 [2024-11-21 02:41:45.944217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.442 02:41:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.442 02:41:46 -- common/autotest_common.sh@862 -- # return 0 00:25:06.442 02:41:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:06.442 02:41:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:06.442 02:41:46 -- common/autotest_common.sh@10 -- # set +x 00:25:06.442 02:41:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.442 02:41:46 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.442 02:41:46 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:06.442 [2024-11-21 02:41:46.980990] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.442 02:41:46 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:06.701 Malloc0 00:25:06.701 02:41:47 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.960 02:41:47 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.219 02:41:47 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.477 [2024-11-21 02:41:47.967386] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.477 02:41:47 -- host/timeout.sh@32 -- # bdevperf_pid=89874 00:25:07.477 02:41:47 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:07.477 02:41:47 -- host/timeout.sh@34 -- # waitforlisten 89874 /var/tmp/bdevperf.sock 00:25:07.477 02:41:47 -- common/autotest_common.sh@829 -- # '[' -z 89874 ']' 00:25:07.477 02:41:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.477 02:41:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.477 02:41:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.477 02:41:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.477 02:41:47 -- common/autotest_common.sh@10 -- # set +x 00:25:07.477 [2024-11-21 02:41:48.043618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:07.477 [2024-11-21 02:41:48.043705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89874 ] 00:25:07.736 [2024-11-21 02:41:48.182901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.736 [2024-11-21 02:41:48.290504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.675 02:41:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.675 02:41:48 -- common/autotest_common.sh@862 -- # return 0 00:25:08.675 02:41:48 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:08.675 02:41:49 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:08.936 NVMe0n1 00:25:08.936 02:41:49 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:08.936 02:41:49 -- host/timeout.sh@51 -- # rpc_pid=89922 00:25:08.936 02:41:49 -- host/timeout.sh@53 -- # sleep 1 00:25:09.195 Running I/O for 10 seconds... 00:25:10.135 02:41:50 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.135 [2024-11-21 02:41:50.723953] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724124] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 
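The first timeout scenario is set up through the bdevperf RPC socket: the controller is attached with a 5 second loss timeout and a 2 second reconnect delay, I/O is started, and after one second the listener is removed so that every reconnect attempt fails with connect() errno 111, as the trace that follows shows. A condensed sketch of those steps:

SPDK=/home/vagrant/spdk_repo/spdk
rpc="$SPDK/scripts/rpc.py"
brpc="$rpc -s /var/tmp/bdevperf.sock"

$brpc bdev_nvme_set_options -r -1        # retry option exactly as the test passes it
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Run the verify workload, then yank the listener out from under it.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420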
[2024-11-21 02:41:50.724170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724222] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724229] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724259] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724297] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724313] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724321] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724329] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724337] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724345] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724353] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724388] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ba40 is same with the state(5) to be set 00:25:10.135 [2024-11-21 02:41:50.724697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.135 [2024-11-21 02:41:50.724770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.135 [2024-11-21 02:41:50.724800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.135 [2024-11-21 02:41:50.724810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.135 [2024-11-21 02:41:50.724819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.135 [2024-11-21 02:41:50.724827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.135 [2024-11-21 02:41:50.724837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.135 [2024-11-21 02:41:50.724845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.135 [2024-11-21 02:41:50.724854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.135 [2024-11-21 02:41:50.724862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.135 [2024-11-21 02:41:50.724871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.135 [2024-11-21 02:41:50.724878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.135 [2024-11-21 02:41:50.724887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.724896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.724905] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.724912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.724920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.724927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.724936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.724943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.724951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.724958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.724966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.724973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.724982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.724989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.724999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725249] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 
nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.136 [2024-11-21 02:41:50.725479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.136 [2024-11-21 02:41:50.725487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.136 [2024-11-21 02:41:50.725494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125688 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:10.137 [2024-11-21 02:41:50.725747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.137 [2024-11-21 02:41:50.725769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.137 [2024-11-21 02:41:50.725818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.137 [2024-11-21 02:41:50.725850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.137 [2024-11-21 02:41:50.725867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.137 [2024-11-21 02:41:50.725888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.137 [2024-11-21 02:41:50.725905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.137 [2024-11-21 
02:41:50.725922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.137 [2024-11-21 02:41:50.725971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.725987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.725996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.726004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.726013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.726020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.726029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.726036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.726045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.726052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.726061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.726068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.726077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.726107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.137 [2024-11-21 02:41:50.726125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.137 [2024-11-21 02:41:50.726132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726278] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.138 [2024-11-21 02:41:50.726667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.138 [2024-11-21 02:41:50.726795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.138 [2024-11-21 02:41:50.726802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 
[2024-11-21 02:41:50.726811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.139 [2024-11-21 02:41:50.726943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.726952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ba050 is same with the state(5) to be set 00:25:10.139 [2024-11-21 02:41:50.726962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:10.139 [2024-11-21 02:41:50.726968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:10.139 [2024-11-21 02:41:50.726974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126376 len:8 PRP1 0x0 PRP2 0x0 00:25:10.139 [2024-11-21 02:41:50.726981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.727040] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ba050 was disconnected and freed. reset controller. 00:25:10.139 [2024-11-21 02:41:50.727154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.139 [2024-11-21 02:41:50.727168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.727177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.139 [2024-11-21 02:41:50.727196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.727204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.139 [2024-11-21 02:41:50.727211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.727220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.139 [2024-11-21 02:41:50.727227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.139 [2024-11-21 02:41:50.727234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x844dc0 is same with the state(5) to be set 00:25:10.139 [2024-11-21 02:41:50.727414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.139 [2024-11-21 02:41:50.727435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x844dc0 (9): Bad file descriptor 00:25:10.139 [2024-11-21 02:41:50.727533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-11-21 02:41:50.727574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.139 [2024-11-21 02:41:50.727588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x844dc0 with addr=10.0.0.2, port=4420 00:25:10.139 [2024-11-21 02:41:50.727597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x844dc0 is same with the state(5) to be set 00:25:10.139 [2024-11-21 02:41:50.727612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x844dc0 (9): Bad file descriptor 00:25:10.139 [2024-11-21 02:41:50.727625] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:10.139 [2024-11-21 02:41:50.727633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:10.139 [2024-11-21 02:41:50.727642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:10.139 [2024-11-21 02:41:50.738637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.139 [2024-11-21 02:41:50.738710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:10.139 02:41:50 -- host/timeout.sh@56 -- # sleep 2 00:25:12.675 [2024-11-21 02:41:52.738849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.675 [2024-11-21 02:41:52.738913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.675 [2024-11-21 02:41:52.738929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x844dc0 with addr=10.0.0.2, port=4420 00:25:12.675 [2024-11-21 02:41:52.738939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x844dc0 is same with the state(5) to be set 00:25:12.675 [2024-11-21 02:41:52.738955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x844dc0 (9): Bad file descriptor 00:25:12.675 [2024-11-21 02:41:52.738969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.675 [2024-11-21 02:41:52.738977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.675 [2024-11-21 02:41:52.738984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.675 [2024-11-21 02:41:52.739001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.675 [2024-11-21 02:41:52.739009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.675 02:41:52 -- host/timeout.sh@57 -- # get_controller 00:25:12.675 02:41:52 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.675 02:41:52 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:12.675 02:41:53 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:25:12.675 02:41:53 -- host/timeout.sh@58 -- # get_bdev 00:25:12.675 02:41:53 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:12.675 02:41:53 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:12.675 02:41:53 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:25:12.675 02:41:53 -- host/timeout.sh@61 -- # sleep 5 00:25:14.579 [2024-11-21 02:41:54.739067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.579 [2024-11-21 02:41:54.739145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.579 [2024-11-21 02:41:54.739162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x844dc0 with addr=10.0.0.2, port=4420 00:25:14.579 [2024-11-21 02:41:54.739172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x844dc0 is same with the state(5) to be set 00:25:14.579 [2024-11-21 02:41:54.739188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x844dc0 (9): Bad file descriptor 00:25:14.579 [2024-11-21 02:41:54.739201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.579 [2024-11-21 02:41:54.739210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization 
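The get_controller and get_bdev checks at host/timeout.sh@41 and @37 are thin wrappers around two bdevperf RPCs; re-created from the calls in the log they look like the sketch below. While reconnect attempts are still pending (every 2 seconds here) both objects are expected to exist; once the 5 second ctrlr-loss timeout expires they should disappear, which is what the later '' == '' comparisons verify.

brpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

get_controller() { $brpc bdev_nvme_get_controllers | jq -r '.[].name'; }
get_bdev()       { $brpc bdev_get_bdevs | jq -r '.[].name'; }

[[ $(get_controller) == "NVMe0" ]]    # still present while reconnects are retried
[[ $(get_bdev) == "NVMe0n1" ]]
sleep 5                               # outlive --ctrlr-loss-timeout-sec
[[ $(get_controller) == "" ]]         # controller deleted after the loss timeout
[[ $(get_bdev) == "" ]]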
failed 00:25:14.579 [2024-11-21 02:41:54.739217] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.579 [2024-11-21 02:41:54.739233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.579 [2024-11-21 02:41:54.739242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.482 [2024-11-21 02:41:56.739256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.482 [2024-11-21 02:41:56.739299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.482 [2024-11-21 02:41:56.739308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.482 [2024-11-21 02:41:56.739325] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:16.482 [2024-11-21 02:41:56.739340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.417 00:25:17.417 Latency(us) 00:25:17.417 [2024-11-21T02:41:58.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.417 [2024-11-21T02:41:58.064Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:17.417 Verification LBA range: start 0x0 length 0x4000 00:25:17.417 NVMe0n1 : 8.13 1933.24 7.55 15.75 0.00 65583.94 2576.76 7015926.69 00:25:17.417 [2024-11-21T02:41:58.064Z] =================================================================================================================== 00:25:17.417 [2024-11-21T02:41:58.064Z] Total : 1933.24 7.55 15.75 0.00 65583.94 2576.76 7015926.69 00:25:17.417 0 00:25:17.676 02:41:58 -- host/timeout.sh@62 -- # get_controller 00:25:17.676 02:41:58 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:17.676 02:41:58 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:17.935 02:41:58 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:25:17.935 02:41:58 -- host/timeout.sh@63 -- # get_bdev 00:25:17.935 02:41:58 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:17.935 02:41:58 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:18.195 02:41:58 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:25:18.195 02:41:58 -- host/timeout.sh@65 -- # wait 89922 00:25:18.195 02:41:58 -- host/timeout.sh@67 -- # killprocess 89874 00:25:18.195 02:41:58 -- common/autotest_common.sh@936 -- # '[' -z 89874 ']' 00:25:18.195 02:41:58 -- common/autotest_common.sh@940 -- # kill -0 89874 00:25:18.195 02:41:58 -- common/autotest_common.sh@941 -- # uname 00:25:18.195 02:41:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:18.195 02:41:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89874 00:25:18.453 killing process with pid 89874 00:25:18.454 Received shutdown signal, test time was about 9.244848 seconds 00:25:18.454 00:25:18.454 Latency(us) 00:25:18.454 [2024-11-21T02:41:59.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.454 [2024-11-21T02:41:59.101Z] =================================================================================================================== 00:25:18.454 [2024-11-21T02:41:59.101Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.454 02:41:58 -- common/autotest_common.sh@942 -- # 
process_name=reactor_2 00:25:18.454 02:41:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:18.454 02:41:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89874' 00:25:18.454 02:41:58 -- common/autotest_common.sh@955 -- # kill 89874 00:25:18.454 02:41:58 -- common/autotest_common.sh@960 -- # wait 89874 00:25:18.713 02:41:59 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.713 [2024-11-21 02:41:59.332505] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:18.972 02:41:59 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:18.972 02:41:59 -- host/timeout.sh@74 -- # bdevperf_pid=90081 00:25:18.972 02:41:59 -- host/timeout.sh@76 -- # waitforlisten 90081 /var/tmp/bdevperf.sock 00:25:18.972 02:41:59 -- common/autotest_common.sh@829 -- # '[' -z 90081 ']' 00:25:18.972 02:41:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:18.972 02:41:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.972 02:41:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:18.972 02:41:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.972 02:41:59 -- common/autotest_common.sh@10 -- # set +x 00:25:18.972 [2024-11-21 02:41:59.394542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:18.972 [2024-11-21 02:41:59.394639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90081 ] 00:25:18.972 [2024-11-21 02:41:59.526524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.972 [2024-11-21 02:41:59.603534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.908 02:42:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.908 02:42:00 -- common/autotest_common.sh@862 -- # return 0 00:25:19.908 02:42:00 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:20.166 02:42:00 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:25:20.425 NVMe0n1 00:25:20.425 02:42:00 -- host/timeout.sh@84 -- # rpc_pid=90123 00:25:20.425 02:42:00 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:20.425 02:42:00 -- host/timeout.sh@86 -- # sleep 1 00:25:20.425 Running I/O for 10 seconds... 
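(Aside, not part of the captured output: the setup traced above can be condensed into a short, hedged sketch of the same sequence — start bdevperf in RPC-wait mode, attach the TCP controller with the reconnect knobs this test exercises, then kick off the workload. Paths, NQN, address, and options are copied from the trace lines; this is an illustration of the flow, not a verbatim excerpt of host/timeout.sh:

    # start bdevperf idle (-z) and expose its RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    # attach the NVMe-oF/TCP controller; keep retrying for up to 5 s on connection loss,
    # fail in-flight I/O after 2 s, and wait 1 s between reconnect attempts
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # run the configured verify workload over the attached bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The 10-second run that follows is what produces the Latency(us) table further down.)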
00:25:21.361 02:42:01 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.622 [2024-11-21 02:42:02.173400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.622 [2024-11-21 02:42:02.173563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173601] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173706] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173847] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173856] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173865] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173883] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173892] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173935] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.173985] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e38b70 is same with the state(5) to be set 00:25:21.623 [2024-11-21 02:42:02.174295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.623 [2024-11-21 02:42:02.174356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.623 [2024-11-21 02:42:02.174380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.623 [2024-11-21 02:42:02.174413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.623 [2024-11-21 02:42:02.174440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.623 [2024-11-21 02:42:02.174465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.623 [2024-11-21 02:42:02.174475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.623 [2024-11-21 02:42:02.174484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.623 [2024-11-21 02:42:02.174494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.623 [2024-11-21 02:42:02.174506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.623 [2024-11-21 02:42:02.174517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.623 [2024-11-21 02:42:02.174525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.623 [2024-11-21 02:42:02.174535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.623 [2024-11-21 02:42:02.174557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.623 [2024-11-21 02:42:02.174567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.623 [2024-11-21 02:42:02.174576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174671] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.624 [2024-11-21 02:42:02.174819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.624 [2024-11-21 02:42:02.174836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.624 [2024-11-21 02:42:02.174871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.624 [2024-11-21 02:42:02.174905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.624 [2024-11-21 02:42:02.174939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.624 [2024-11-21 02:42:02.174956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.624 [2024-11-21 02:42:02.174972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.174988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.174997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.624 [2024-11-21 02:42:02.175189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.624 [2024-11-21 02:42:02.175198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 
[2024-11-21 02:42:02.175223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.625 [2024-11-21 02:42:02.175502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.625 [2024-11-21 02:42:02.175876] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.625 [2024-11-21 02:42:02.175885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.175894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.175903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.175911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.175921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.175929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.175939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.175946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.175956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.175964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.175973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.175981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.175990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.175998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 
02:42:02.176448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.626 [2024-11-21 02:42:02.176473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.626 [2024-11-21 02:42:02.176482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.626 [2024-11-21 02:42:02.176489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.627 [2024-11-21 02:42:02.176539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.627 [2024-11-21 02:42:02.176573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.627 [2024-11-21 02:42:02.176609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.627 [2024-11-21 02:42:02.176626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.627 [2024-11-21 02:42:02.176683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.627 [2024-11-21 02:42:02.176700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.627 [2024-11-21 02:42:02.176732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176832] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.627 [2024-11-21 02:42:02.176878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42050 is same with the state(5) to be set 00:25:21.627 [2024-11-21 02:42:02.176899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.627 [2024-11-21 02:42:02.176906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.627 [2024-11-21 02:42:02.176913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128904 len:8 PRP1 0x0 PRP2 0x0 00:25:21.627 [2024-11-21 02:42:02.176921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.627 [2024-11-21 02:42:02.176975] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc42050 was disconnected and freed. reset controller. 00:25:21.627 [2024-11-21 02:42:02.177181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.627 [2024-11-21 02:42:02.177255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbccdc0 (9): Bad file descriptor 00:25:21.627 [2024-11-21 02:42:02.177352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.627 [2024-11-21 02:42:02.177422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.627 [2024-11-21 02:42:02.177437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbccdc0 with addr=10.0.0.2, port=4420 00:25:21.627 [2024-11-21 02:42:02.177447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbccdc0 is same with the state(5) to be set 00:25:21.627 [2024-11-21 02:42:02.177463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbccdc0 (9): Bad file descriptor 00:25:21.627 [2024-11-21 02:42:02.177478] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.627 [2024-11-21 02:42:02.177487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.627 [2024-11-21 02:42:02.177497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.627 [2024-11-21 02:42:02.177514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:21.627 [2024-11-21 02:42:02.177525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:21.627 02:42:02 -- host/timeout.sh@90 -- # sleep 1
00:25:22.564 [2024-11-21 02:42:03.177603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.564 [2024-11-21 02:42:03.177681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:22.564 [2024-11-21 02:42:03.177696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbccdc0 with addr=10.0.0.2, port=4420
00:25:22.564 [2024-11-21 02:42:03.177706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbccdc0 is same with the state(5) to be set
00:25:22.564 [2024-11-21 02:42:03.177724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbccdc0 (9): Bad file descriptor
00:25:22.564 [2024-11-21 02:42:03.177748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:22.564 [2024-11-21 02:42:03.177762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:22.564 [2024-11-21 02:42:03.177770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:22.564 [2024-11-21 02:42:03.177790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:22.564 [2024-11-21 02:42:03.177801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:22.564 02:42:03 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:22.823 [2024-11-21 02:42:03.395273] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:22.823 02:42:03 -- host/timeout.sh@92 -- # wait 90123
00:25:23.758 [2024-11-21 02:42:04.197202] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:31.962
00:25:31.962 Latency(us)
00:25:31.962 [2024-11-21T02:42:12.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:31.962 [2024-11-21T02:42:12.609Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:31.962 Verification LBA range: start 0x0 length 0x4000
00:25:31.962 NVMe0n1 : 10.01 10811.77 42.23 0.00 0.00 11823.78 1370.30 3019898.88
00:25:31.962 [2024-11-21T02:42:12.609Z] ===================================================================================================================
00:25:31.962 [2024-11-21T02:42:12.609Z] Total : 10811.77 42.23 0.00 0.00 11823.78 1370.30 3019898.88
00:25:31.962 0
00:25:31.962 02:42:11 -- host/timeout.sh@97 -- # rpc_pid=90244
00:25:31.962 02:42:11 -- host/timeout.sh@98 -- # sleep 1
00:25:31.962 02:42:11 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:31.962 Running I/O for 10 seconds...
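For orientation: the host/timeout.sh xtrace lines above and below show the test toggling the target's TCP listener underneath a running bdevperf job, which is what forces the host-side aborts and controller resets filling this log. A minimal shell sketch of one such pass, reconstructed only from the traced commands; the backgrounding, PID capture, and exact sleep placement are assumptions for illustration, not the actual host/timeout.sh source:

  # sketch: trigger a 10-second bdevperf run over its RPC socket and remember the helper's PID
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!
  sleep 1
  # drop the TCP listener: in-flight I/O is aborted (ABORTED - SQ DELETION) and the host begins resetting the controller
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # re-add the listener so the next reconnect attempt succeeds and the reset completes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # wait for bdevperf to finish and print its latency summary
  wait "$rpc_pid"

Each pass of this toggle produces one burst of ABORTED - SQ DELETION completions and reconnect errors like the ones printed before and after this point in the log.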
00:25:31.962 02:42:12 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.962 [2024-11-21 02:42:12.327542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327650] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327693] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327806] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327842] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327876] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327884] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327937] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327945] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.327991] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328000] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328036] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328062] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328071] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328079] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328088] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.962 [2024-11-21 02:42:12.328105] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328203] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328211] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328220] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328228] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the 
state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328245] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328262] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328270] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c95c70 is same with the state(5) to be set 00:25:31.963 [2024-11-21 02:42:12.328718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.328985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.328996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.963 [2024-11-21 02:42:12.329299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.963 [2024-11-21 02:42:12.329316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.963 [2024-11-21 02:42:12.329325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.963 [2024-11-21 02:42:12.329332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:88 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7912 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.329964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.329991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.964 [2024-11-21 02:42:12.329999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.330008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 
02:42:12.330016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.330026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.330034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.964 [2024-11-21 02:42:12.330043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.964 [2024-11-21 02:42:12.330051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:31.965 [2024-11-21 02:42:12.330561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.965 [2024-11-21 02:42:12.330715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.965 [2024-11-21 02:42:12.330773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.965 [2024-11-21 02:42:12.330784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.330791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-11-21 02:42:12.330808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.330825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-11-21 02:42:12.330843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-11-21 02:42:12.330860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.330882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.330899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-11-21 02:42:12.330916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-11-21 02:42:12.330932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:8272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.330948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.330965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.330987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.330997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.331005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.331023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.331040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.331056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.966 [2024-11-21 02:42:12.331072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3df90 is same with the state(5) to be set 00:25:31.966 [2024-11-21 02:42:12.331090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.966 [2024-11-21 02:42:12.331097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.966 [2024-11-21 02:42:12.331104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7720 len:8 PRP1 0x0 PRP2 0x0 00:25:31.966 [2024-11-21 02:42:12.331111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331148] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: 
qpair 0xc3df90 was disconnected and freed. reset controller. 00:25:31.966 [2024-11-21 02:42:12.331213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.966 [2024-11-21 02:42:12.331228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.966 [2024-11-21 02:42:12.331250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.966 [2024-11-21 02:42:12.331267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.966 [2024-11-21 02:42:12.331283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.966 [2024-11-21 02:42:12.331291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbccdc0 is same with the state(5) to be set 00:25:31.966 [2024-11-21 02:42:12.331468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.966 [2024-11-21 02:42:12.331490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbccdc0 (9): Bad file descriptor 00:25:31.966 [2024-11-21 02:42:12.331564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.966 [2024-11-21 02:42:12.331610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.966 [2024-11-21 02:42:12.331624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbccdc0 with addr=10.0.0.2, port=4420 00:25:31.966 [2024-11-21 02:42:12.331639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbccdc0 is same with the state(5) to be set 00:25:31.966 [2024-11-21 02:42:12.331655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbccdc0 (9): Bad file descriptor 00:25:31.966 [2024-11-21 02:42:12.331669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.966 [2024-11-21 02:42:12.331678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.966 [2024-11-21 02:42:12.331688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.966 [2024-11-21 02:42:12.342829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
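Every queued command in the dump above completes with "(00/08)": the first value is the NVMe status code type (0x0, generic command status) and the second is the status code (0x08, command aborted due to SQ deletion), which is what the initiator reports for in-flight I/O when the qpair it was submitted on is torn down for the reset. A minimal, hypothetical decoder for that pair, in Python; the table and function below are illustrative only and are not SPDK code or part of the test output:

# Hypothetical helper: map the "(sct/sc)" pair that the completion lines above
# print, e.g. "(00/08)", to a readable name. Values follow the NVMe spec.
SCT_NAMES = {0x0: "GENERIC"}                  # 0x0 = generic command status
GENERIC_SC_NAMES = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",            # the status seen throughout this dump
}

def decode_status(sct: int, sc: int) -> str:
    sct_name = SCT_NAMES.get(sct, f"SCT 0x{sct:x}")
    sc_name = GENERIC_SC_NAMES.get(sc, f"SC 0x{sc:x}") if sct == 0x0 else f"SC 0x{sc:x}"
    return f"{sct_name}/{sc_name}"

print(decode_status(0x0, 0x8))                # -> GENERIC/ABORTED - SQ DELETION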
00:25:31.966 [2024-11-21 02:42:12.342900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.966 02:42:12 -- host/timeout.sh@101 -- # sleep 3 00:25:32.903 [2024-11-21 02:42:13.342971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.903 [2024-11-21 02:42:13.343042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.903 [2024-11-21 02:42:13.343058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbccdc0 with addr=10.0.0.2, port=4420 00:25:32.903 [2024-11-21 02:42:13.343068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbccdc0 is same with the state(5) to be set 00:25:32.903 [2024-11-21 02:42:13.343086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbccdc0 (9): Bad file descriptor 00:25:32.903 [2024-11-21 02:42:13.343109] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:32.903 [2024-11-21 02:42:13.343120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:32.903 [2024-11-21 02:42:13.343129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:32.903 [2024-11-21 02:42:13.343145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:32.903 [2024-11-21 02:42:13.343155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.838 [2024-11-21 02:42:14.343215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.838 [2024-11-21 02:42:14.343273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.838 [2024-11-21 02:42:14.343287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbccdc0 with addr=10.0.0.2, port=4420 00:25:33.838 [2024-11-21 02:42:14.343297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbccdc0 is same with the state(5) to be set 00:25:33.838 [2024-11-21 02:42:14.343313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbccdc0 (9): Bad file descriptor 00:25:33.838 [2024-11-21 02:42:14.343327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:33.838 [2024-11-21 02:42:14.343335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:33.838 [2024-11-21 02:42:14.343342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.838 [2024-11-21 02:42:14.343359] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
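Each retry in this loop fails inside posix_sock_create() with errno = 111, which on Linux is ECONNREFUSED: the test has removed the target's TCP listener, so nothing accepts connections on 10.0.0.2:4420 and bdev_nvme keeps retrying after its reconnect delay. A standalone probe that reproduces the same errno while the listener is down might look like the sketch below (address and port copied from the log; this snippet is illustrative and not part of the test scripts):

import errno
import socket

assert errno.ECONNREFUSED == 111  # the errno value posix_sock_create() logs above (Linux)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(2)
try:
    s.connect(("10.0.0.2", 4420))  # NVMe/TCP listener address used by this test
except ConnectionRefusedError as exc:
    print("connect() refused while the listener is removed, errno =", exc.errno)
finally:
    s.close()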
00:25:33.838 [2024-11-21 02:42:14.343368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.775 [2024-11-21 02:42:15.343483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.775 [2024-11-21 02:42:15.343558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.775 [2024-11-21 02:42:15.343573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbccdc0 with addr=10.0.0.2, port=4420 00:25:34.775 [2024-11-21 02:42:15.343583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbccdc0 is same with the state(5) to be set 00:25:34.775 [2024-11-21 02:42:15.343709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbccdc0 (9): Bad file descriptor 00:25:34.775 [2024-11-21 02:42:15.343842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.775 [2024-11-21 02:42:15.343856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.775 [2024-11-21 02:42:15.343865] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.775 [2024-11-21 02:42:15.345663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:34.775 [2024-11-21 02:42:15.345686] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.775 02:42:15 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.033 [2024-11-21 02:42:15.587559] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.033 02:42:15 -- host/timeout.sh@103 -- # wait 90244 00:25:35.969 [2024-11-21 02:42:16.367486] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
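The reconnect attempts only start succeeding after host/timeout.sh@102 re-creates the listener with nvmf_subsystem_add_listener; the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above is the target confirming that, and the next reset then completes ("Resetting controller successful"). The same RPC can be driven from a short script, sketched here with the paths exactly as they appear in this log (assumes that copy of rpc.py and a running target; this is not part of the test output):

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path used throughout this log

# Re-create the TCP listener that the test removed earlier so the host can reconnect.
subprocess.run(
    [RPC, "nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
     "-t", "tcp", "-a", "10.0.0.2", "-s", "4420"],
    check=True,
)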
00:25:41.243
00:25:41.243 Latency(us)
00:25:41.243 [2024-11-21T02:42:21.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.243 [2024-11-21T02:42:21.890Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:41.243 Verification LBA range: start 0x0 length 0x4000
00:25:41.243 NVMe0n1 : 10.01 9335.91 36.47 7588.69 0.00 7551.94 562.27 3019898.88
00:25:41.243 [2024-11-21T02:42:21.890Z] ===================================================================================================================
00:25:41.243 [2024-11-21T02:42:21.890Z] Total : 9335.91 36.47 7588.69 0.00 7551.94 0.00 3019898.88
00:25:41.243 0
00:25:41.243 02:42:21 -- host/timeout.sh@105 -- # killprocess 90081
00:25:41.243 02:42:21 -- common/autotest_common.sh@936 -- # '[' -z 90081 ']'
00:25:41.243 02:42:21 -- common/autotest_common.sh@940 -- # kill -0 90081
00:25:41.243 02:42:21 -- common/autotest_common.sh@941 -- # uname
00:25:41.243 02:42:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:41.243 02:42:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90081
00:25:41.243 killing process with pid 90081
Received shutdown signal, test time was about 10.000000 seconds
00:25:41.243
00:25:41.243 Latency(us)
00:25:41.243 [2024-11-21T02:42:21.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:41.243 [2024-11-21T02:42:21.890Z] ===================================================================================================================
00:25:41.243 [2024-11-21T02:42:21.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:41.243 02:42:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:41.243 02:42:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:41.243 02:42:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90081'
00:25:41.243 02:42:21 -- common/autotest_common.sh@955 -- # kill 90081
00:25:41.243 02:42:21 -- common/autotest_common.sh@960 -- # wait 90081
00:25:41.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:41.243 02:42:21 -- host/timeout.sh@110 -- # bdevperf_pid=90366
00:25:41.243 02:42:21 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:25:41.243 02:42:21 -- host/timeout.sh@112 -- # waitforlisten 90366 /var/tmp/bdevperf.sock
00:25:41.243 02:42:21 -- common/autotest_common.sh@829 -- # '[' -z 90366 ']'
00:25:41.243 02:42:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:41.243 02:42:21 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:41.243 02:42:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:41.243 02:42:21 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:41.243 02:42:21 -- common/autotest_common.sh@10 -- # set +x
00:25:41.243 [2024-11-21 02:42:21.617086] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
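For the 10-second run summarized in the table above, bdevperf reports 9335.91 IOPS at the 4096-byte I/O size set with -o 4096, and the MiB/s column is simply IOPS times I/O size. A quick consistency check on those two columns (values copied from the table; the arithmetic below is the only thing added here):

io_per_sec = 9335.91               # IOPS column from the NVMe0n1 row above
io_size_bytes = 4096               # -o 4096 on the bdevperf command line
mib_per_sec = io_per_sec * io_size_bytes / (1024 * 1024)
print(f"{mib_per_sec:.2f} MiB/s")  # 36.47, matching the MiB/s column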
00:25:41.243 [2024-11-21 02:42:21.617214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90366 ] 00:25:41.243 [2024-11-21 02:42:21.755048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.243 [2024-11-21 02:42:21.830136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.179 02:42:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.179 02:42:22 -- common/autotest_common.sh@862 -- # return 0 00:25:42.179 02:42:22 -- host/timeout.sh@116 -- # dtrace_pid=90394 00:25:42.179 02:42:22 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 90366 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:42.179 02:42:22 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:42.437 02:42:22 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:42.696 NVMe0n1 00:25:42.696 02:42:23 -- host/timeout.sh@124 -- # rpc_pid=90452 00:25:42.696 02:42:23 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:42.696 02:42:23 -- host/timeout.sh@125 -- # sleep 1 00:25:42.696 Running I/O for 10 seconds... 00:25:43.632 02:42:24 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.893 [2024-11-21 02:42:24.473938] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.474566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.474689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.474805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.474878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.474947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475013] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475078] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475156] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475279] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475599] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475661] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.475978] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476074] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476220] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476355] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476562] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476913] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.476979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477225] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477287] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.893 [2024-11-21 02:42:24.477636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.894 [2024-11-21 02:42:24.477700] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.894 [2024-11-21 02:42:24.477810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.894 [2024-11-21 02:42:24.477923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.894 [2024-11-21 02:42:24.478007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.894 [2024-11-21 02:42:24.478032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.894 [2024-11-21 02:42:24.478042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.894 [2024-11-21 02:42:24.478052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99400 is same with the state(5) to be set 00:25:43.894 [2024-11-21 02:42:24.478369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 
02:42:24.478819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.478990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.478999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.479007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.479017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.479025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.479034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.894 [2024-11-21 02:42:24.479042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.894 [2024-11-21 02:42:24.479050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479151] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 
02:42:24.479651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.895 [2024-11-21 02:42:24.479691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.895 [2024-11-21 02:42:24.479699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.479986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.479994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480010] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:43.896 [2024-11-21 02:42:24.480352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.896 [2024-11-21 02:42:24.480375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.896 [2024-11-21 02:42:24.480385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 
02:42:24.480532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.897 [2024-11-21 02:42:24.480587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240c050 is same with the state(5) to be set 00:25:43.897 [2024-11-21 02:42:24.480605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:43.897 [2024-11-21 02:42:24.480612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:43.897 [2024-11-21 02:42:24.480619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125520 len:8 PRP1 0x0 PRP2 0x0 00:25:43.897 [2024-11-21 02:42:24.480626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.897 [2024-11-21 02:42:24.480672] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x240c050 was disconnected and freed. reset controller. 
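The burst of notices above is bdev_nvme draining its queue when the TCP qpair drops: every outstanding READ is completed manually with ABORTED - SQ DELETION (00/08), after which the qpair (0x240c050) is freed and a controller reset is scheduled. To gauge how much I/O was in flight during a run like this, the aborted completions can be tallied straight from the captured console output; a minimal sketch, assuming the log was saved to build.log (the file name is an assumption, the match patterns are copied verbatim from the entries above):

# count every queued command that was completed with ABORTED - SQ DELETION
grep -c 'ABORTED - SQ DELETION' build.log
# break the count down by submission queue id
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c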
00:25:43.897 [2024-11-21 02:42:24.480903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.897 [2024-11-21 02:42:24.480982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396dc0 (9): Bad file descriptor 00:25:43.897 [2024-11-21 02:42:24.481063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.897 [2024-11-21 02:42:24.481111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.897 [2024-11-21 02:42:24.481125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2396dc0 with addr=10.0.0.2, port=4420 00:25:43.897 [2024-11-21 02:42:24.481134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396dc0 is same with the state(5) to be set 00:25:43.897 [2024-11-21 02:42:24.481149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396dc0 (9): Bad file descriptor 00:25:43.897 [2024-11-21 02:42:24.481162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:43.897 [2024-11-21 02:42:24.481171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:43.897 [2024-11-21 02:42:24.481179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:43.897 [2024-11-21 02:42:24.481201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.897 [2024-11-21 02:42:24.481211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:43.897 02:42:24 -- host/timeout.sh@128 -- # wait 90452 00:25:46.428 [2024-11-21 02:42:26.481283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.428 [2024-11-21 02:42:26.481360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.428 [2024-11-21 02:42:26.481376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2396dc0 with addr=10.0.0.2, port=4420 00:25:46.428 [2024-11-21 02:42:26.481386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396dc0 is same with the state(5) to be set 00:25:46.428 [2024-11-21 02:42:26.481402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396dc0 (9): Bad file descriptor 00:25:46.428 [2024-11-21 02:42:26.481424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:46.428 [2024-11-21 02:42:26.481435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:46.428 [2024-11-21 02:42:26.481443] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:46.428 [2024-11-21 02:42:26.481459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
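Each retry above and below follows the same pattern: posix_sock_create() gets errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more, nvme_tcp_qpair_connect_sock() therefore fails, controller re-initialization is abandoned, and bdev_nvme schedules the next reset roughly two seconds later. A quick way to confirm from the initiator side that the port really is closed, independent of SPDK, is a plain TCP probe; an illustrative check (not part of the test scripts, and the one-second timeout is an arbitrary choice):

# attempt a bare TCP connect to the target address used by the test
timeout 1 bash -c 'echo > /dev/tcp/10.0.0.2/4420' \
  && echo 'listener is up' \
  || echo 'connection refused or timed out (matches errno 111 above)'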
00:25:46.428 [2024-11-21 02:42:26.481468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.331 [2024-11-21 02:42:28.481543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-21 02:42:28.481614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-21 02:42:28.481630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2396dc0 with addr=10.0.0.2, port=4420 00:25:48.331 [2024-11-21 02:42:28.481639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2396dc0 is same with the state(5) to be set 00:25:48.331 [2024-11-21 02:42:28.481656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2396dc0 (9): Bad file descriptor 00:25:48.331 [2024-11-21 02:42:28.481670] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:48.331 [2024-11-21 02:42:28.481680] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:48.331 [2024-11-21 02:42:28.481688] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.331 [2024-11-21 02:42:28.481704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:48.331 [2024-11-21 02:42:28.481713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.234 [2024-11-21 02:42:30.481834] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.234 [2024-11-21 02:42:30.481905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.234 [2024-11-21 02:42:30.481916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.234 [2024-11-21 02:42:30.481925] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:50.234 [2024-11-21 02:42:30.481951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.171 00:25:51.171 Latency(us) 00:25:51.171 [2024-11-21T02:42:31.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.171 [2024-11-21T02:42:31.818Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:51.171 NVMe0n1 : 8.17 3304.81 12.91 15.67 0.00 38487.89 2591.65 7046430.72 00:25:51.171 [2024-11-21T02:42:31.818Z] =================================================================================================================== 00:25:51.171 [2024-11-21T02:42:31.818Z] Total : 3304.81 12.91 15.67 0.00 38487.89 2591.65 7046430.72 00:25:51.171 0 00:25:51.171 02:42:31 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:51.171 Attaching 5 probes... 
00:25:51.171 1448.520393: reset bdev controller NVMe0 00:25:51.171 1448.644210: reconnect bdev controller NVMe0 00:25:51.171 3448.854015: reconnect delay bdev controller NVMe0 00:25:51.171 3448.866638: reconnect bdev controller NVMe0 00:25:51.171 5449.110265: reconnect delay bdev controller NVMe0 00:25:51.171 5449.122246: reconnect bdev controller NVMe0 00:25:51.171 7449.432943: reconnect delay bdev controller NVMe0 00:25:51.171 7449.453444: reconnect bdev controller NVMe0 00:25:51.171 02:42:31 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:51.171 02:42:31 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:51.171 02:42:31 -- host/timeout.sh@136 -- # kill 90394 00:25:51.171 02:42:31 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:51.171 02:42:31 -- host/timeout.sh@139 -- # killprocess 90366 00:25:51.171 02:42:31 -- common/autotest_common.sh@936 -- # '[' -z 90366 ']' 00:25:51.171 02:42:31 -- common/autotest_common.sh@940 -- # kill -0 90366 00:25:51.171 02:42:31 -- common/autotest_common.sh@941 -- # uname 00:25:51.171 02:42:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:51.171 02:42:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90366 00:25:51.171 killing process with pid 90366 00:25:51.171 Received shutdown signal, test time was about 8.226936 seconds 00:25:51.172 00:25:51.172 Latency(us) 00:25:51.172 [2024-11-21T02:42:31.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.172 [2024-11-21T02:42:31.819Z] =================================================================================================================== 00:25:51.172 [2024-11-21T02:42:31.819Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.172 02:42:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:51.172 02:42:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:51.172 02:42:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90366' 00:25:51.172 02:42:31 -- common/autotest_common.sh@955 -- # kill 90366 00:25:51.172 02:42:31 -- common/autotest_common.sh@960 -- # wait 90366 00:25:51.431 02:42:31 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.691 02:42:32 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:51.691 02:42:32 -- host/timeout.sh@145 -- # nvmftestfini 00:25:51.691 02:42:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:51.691 02:42:32 -- nvmf/common.sh@116 -- # sync 00:25:51.691 02:42:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:51.691 02:42:32 -- nvmf/common.sh@119 -- # set +e 00:25:51.691 02:42:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:51.691 02:42:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:51.691 rmmod nvme_tcp 00:25:51.691 rmmod nvme_fabrics 00:25:51.691 rmmod nvme_keyring 00:25:51.691 02:42:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:51.691 02:42:32 -- nvmf/common.sh@123 -- # set -e 00:25:51.691 02:42:32 -- nvmf/common.sh@124 -- # return 0 00:25:51.691 02:42:32 -- nvmf/common.sh@477 -- # '[' -n 89783 ']' 00:25:51.691 02:42:32 -- nvmf/common.sh@478 -- # killprocess 89783 00:25:51.691 02:42:32 -- common/autotest_common.sh@936 -- # '[' -z 89783 ']' 00:25:51.691 02:42:32 -- common/autotest_common.sh@940 -- # kill -0 89783 00:25:51.691 02:42:32 -- common/autotest_common.sh@941 -- # uname 00:25:51.691 02:42:32 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:25:51.691 02:42:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89783 00:25:51.691 killing process with pid 89783 00:25:51.691 02:42:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:51.691 02:42:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:51.691 02:42:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89783' 00:25:51.691 02:42:32 -- common/autotest_common.sh@955 -- # kill 89783 00:25:51.691 02:42:32 -- common/autotest_common.sh@960 -- # wait 89783 00:25:51.949 02:42:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:51.949 02:42:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:51.950 02:42:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:51.950 02:42:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.950 02:42:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:51.950 02:42:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.950 02:42:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.950 02:42:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.950 02:42:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:51.950 00:25:51.950 real 0m47.401s 00:25:51.950 user 2m18.528s 00:25:51.950 sys 0m5.453s 00:25:51.950 02:42:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:51.950 02:42:32 -- common/autotest_common.sh@10 -- # set +x 00:25:51.950 ************************************ 00:25:51.950 END TEST nvmf_timeout 00:25:51.950 ************************************ 00:25:51.950 02:42:32 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:51.950 02:42:32 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:51.950 02:42:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:51.950 02:42:32 -- common/autotest_common.sh@10 -- # set +x 00:25:52.243 02:42:32 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:52.243 00:25:52.243 real 18m46.625s 00:25:52.243 user 60m3.236s 00:25:52.243 sys 3m52.947s 00:25:52.243 02:42:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:52.243 02:42:32 -- common/autotest_common.sh@10 -- # set +x 00:25:52.243 ************************************ 00:25:52.243 END TEST nvmf_tcp 00:25:52.243 ************************************ 00:25:52.243 02:42:32 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:52.243 02:42:32 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:52.243 02:42:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:52.243 02:42:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:52.243 02:42:32 -- common/autotest_common.sh@10 -- # set +x 00:25:52.243 ************************************ 00:25:52.243 START TEST spdkcli_nvmf_tcp 00:25:52.243 ************************************ 00:25:52.243 02:42:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:52.243 * Looking for test storage... 
00:25:52.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:52.243 02:42:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:52.243 02:42:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:52.243 02:42:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:52.243 02:42:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:52.243 02:42:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:52.243 02:42:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:52.243 02:42:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:52.243 02:42:32 -- scripts/common.sh@335 -- # IFS=.-: 00:25:52.243 02:42:32 -- scripts/common.sh@335 -- # read -ra ver1 00:25:52.243 02:42:32 -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.243 02:42:32 -- scripts/common.sh@336 -- # read -ra ver2 00:25:52.243 02:42:32 -- scripts/common.sh@337 -- # local 'op=<' 00:25:52.243 02:42:32 -- scripts/common.sh@339 -- # ver1_l=2 00:25:52.243 02:42:32 -- scripts/common.sh@340 -- # ver2_l=1 00:25:52.243 02:42:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:52.243 02:42:32 -- scripts/common.sh@343 -- # case "$op" in 00:25:52.243 02:42:32 -- scripts/common.sh@344 -- # : 1 00:25:52.243 02:42:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:52.243 02:42:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:52.243 02:42:32 -- scripts/common.sh@364 -- # decimal 1 00:25:52.243 02:42:32 -- scripts/common.sh@352 -- # local d=1 00:25:52.243 02:42:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.243 02:42:32 -- scripts/common.sh@354 -- # echo 1 00:25:52.243 02:42:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:52.243 02:42:32 -- scripts/common.sh@365 -- # decimal 2 00:25:52.243 02:42:32 -- scripts/common.sh@352 -- # local d=2 00:25:52.243 02:42:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.243 02:42:32 -- scripts/common.sh@354 -- # echo 2 00:25:52.243 02:42:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:52.243 02:42:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:52.243 02:42:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:52.243 02:42:32 -- scripts/common.sh@367 -- # return 0 00:25:52.243 02:42:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.243 02:42:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.243 --rc genhtml_branch_coverage=1 00:25:52.243 --rc genhtml_function_coverage=1 00:25:52.243 --rc genhtml_legend=1 00:25:52.243 --rc geninfo_all_blocks=1 00:25:52.243 --rc geninfo_unexecuted_blocks=1 00:25:52.243 00:25:52.243 ' 00:25:52.243 02:42:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.243 --rc genhtml_branch_coverage=1 00:25:52.243 --rc genhtml_function_coverage=1 00:25:52.243 --rc genhtml_legend=1 00:25:52.243 --rc geninfo_all_blocks=1 00:25:52.243 --rc geninfo_unexecuted_blocks=1 00:25:52.243 00:25:52.243 ' 00:25:52.243 02:42:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.243 --rc genhtml_branch_coverage=1 00:25:52.243 --rc genhtml_function_coverage=1 00:25:52.243 --rc genhtml_legend=1 00:25:52.243 --rc geninfo_all_blocks=1 00:25:52.243 --rc geninfo_unexecuted_blocks=1 00:25:52.243 00:25:52.243 ' 00:25:52.243 02:42:32 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:52.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.243 --rc genhtml_branch_coverage=1 00:25:52.243 --rc genhtml_function_coverage=1 00:25:52.243 --rc genhtml_legend=1 00:25:52.243 --rc geninfo_all_blocks=1 00:25:52.243 --rc geninfo_unexecuted_blocks=1 00:25:52.243 00:25:52.243 ' 00:25:52.243 02:42:32 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:52.243 02:42:32 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:52.243 02:42:32 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:52.243 02:42:32 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:52.243 02:42:32 -- nvmf/common.sh@7 -- # uname -s 00:25:52.243 02:42:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.243 02:42:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.243 02:42:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.243 02:42:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.243 02:42:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.243 02:42:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.243 02:42:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.243 02:42:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.243 02:42:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.243 02:42:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.243 02:42:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:25:52.243 02:42:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:25:52.243 02:42:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.243 02:42:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.243 02:42:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:52.243 02:42:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:52.243 02:42:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.243 02:42:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.243 02:42:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.243 02:42:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.243 02:42:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.243 02:42:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.243 02:42:32 -- paths/export.sh@5 -- # export PATH 00:25:52.243 02:42:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.243 02:42:32 -- nvmf/common.sh@46 -- # : 0 00:25:52.243 02:42:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:52.243 02:42:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:52.243 02:42:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:52.243 02:42:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.243 02:42:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.243 02:42:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:52.243 02:42:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:52.243 02:42:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:52.243 02:42:32 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:52.243 02:42:32 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:52.243 02:42:32 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:52.243 02:42:32 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:52.243 02:42:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:52.243 02:42:32 -- common/autotest_common.sh@10 -- # set +x 00:25:52.243 02:42:32 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:52.243 02:42:32 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=90677 00:25:52.243 02:42:32 -- spdkcli/common.sh@34 -- # waitforlisten 90677 00:25:52.243 02:42:32 -- common/autotest_common.sh@829 -- # '[' -z 90677 ']' 00:25:52.243 02:42:32 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:52.244 02:42:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.244 02:42:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.244 02:42:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.244 02:42:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.244 02:42:32 -- common/autotest_common.sh@10 -- # set +x 00:25:52.503 [2024-11-21 02:42:32.904912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
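The spdkcli run below drives the freshly started nvmf_tgt entirely through the spdkcli shell: it creates six malloc bdevs, a TCP transport, and three subsystems with namespaces, listeners and allowed hosts, then checks the resulting tree with 'll /nvmf' against a match file before tearing everything down again. For reference, a hand-driven replay of the first few steps might look like the sketch below, one command per spdkcli.py invocation (the commands are copied from the job that follows; paths are relative to the spdk repo and the default /var/tmp/spdk.sock RPC socket is assumed):

# a 32 MB malloc bdev with a 512-byte block size, then the TCP transport
scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
# a subsystem with one namespace backed by Malloc3 and a TCP listener
scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
# inspect the resulting configuration tree, as the match step in the job does
scripts/spdkcli.py ll /nvmf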
00:25:52.503 [2024-11-21 02:42:32.905018] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90677 ] 00:25:52.503 [2024-11-21 02:42:33.040399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:52.503 [2024-11-21 02:42:33.116807] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:52.503 [2024-11-21 02:42:33.117144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.503 [2024-11-21 02:42:33.117171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.494 02:42:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.495 02:42:33 -- common/autotest_common.sh@862 -- # return 0 00:25:53.495 02:42:33 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:53.495 02:42:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:53.495 02:42:33 -- common/autotest_common.sh@10 -- # set +x 00:25:53.495 02:42:33 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:53.495 02:42:33 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:53.495 02:42:33 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:53.495 02:42:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:53.495 02:42:33 -- common/autotest_common.sh@10 -- # set +x 00:25:53.495 02:42:33 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:53.495 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:53.495 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:53.495 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:53.495 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:53.495 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:53.495 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:53.495 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:53.495 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:53.495 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:53.495 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:53.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:53.495 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:53.495 ' 00:25:53.753 [2024-11-21 02:42:34.387379] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:56.284 [2024-11-21 02:42:36.645860] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.662 [2024-11-21 02:42:37.927731] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:00.197 [2024-11-21 02:42:40.319052] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:02.102 [2024-11-21 02:42:42.377858] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:03.481 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:03.481 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:03.481 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:03.481 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:03.481 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:03.481 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:03.481 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:03.481 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:03.481 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:03.481 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:03.481 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:03.481 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:03.481 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:03.481 02:42:44 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:03.481 02:42:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:03.481 02:42:44 -- common/autotest_common.sh@10 -- # set +x 00:26:03.481 02:42:44 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:03.481 02:42:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:03.481 02:42:44 -- common/autotest_common.sh@10 -- # set +x 00:26:03.481 02:42:44 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:03.481 02:42:44 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:26:04.049 02:42:44 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:04.049 02:42:44 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:04.049 02:42:44 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:04.049 02:42:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:04.049 02:42:44 -- common/autotest_common.sh@10 -- # set +x 00:26:04.049 02:42:44 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:04.049 02:42:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:04.049 02:42:44 -- 
common/autotest_common.sh@10 -- # set +x 00:26:04.049 02:42:44 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:04.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:04.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:04.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:04.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:04.049 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:04.049 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:04.049 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:04.049 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:04.049 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:04.049 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:04.049 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:04.049 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:04.049 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:04.049 ' 00:26:10.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:10.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:10.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:10.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:10.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:10.666 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:10.666 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:10.666 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:10.666 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:10.666 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:10.666 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:10.666 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:10.666 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:10.666 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:10.666 02:42:50 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:10.666 02:42:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:10.666 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.666 02:42:50 -- spdkcli/nvmf.sh@90 -- # killprocess 90677 00:26:10.666 02:42:50 -- common/autotest_common.sh@936 -- # '[' -z 90677 ']' 00:26:10.666 02:42:50 -- common/autotest_common.sh@940 -- # kill -0 90677 00:26:10.666 02:42:50 -- common/autotest_common.sh@941 -- # uname 00:26:10.666 02:42:50 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:10.666 02:42:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90677 00:26:10.666 02:42:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:10.666 02:42:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:10.666 killing process with pid 90677 00:26:10.666 02:42:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90677' 00:26:10.666 02:42:50 -- common/autotest_common.sh@955 -- # kill 90677 00:26:10.666 [2024-11-21 02:42:50.247487] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:10.666 02:42:50 -- common/autotest_common.sh@960 -- # wait 90677 00:26:10.666 02:42:50 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:10.666 02:42:50 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:10.666 02:42:50 -- spdkcli/common.sh@13 -- # '[' -n 90677 ']' 00:26:10.666 02:42:50 -- spdkcli/common.sh@14 -- # killprocess 90677 00:26:10.666 02:42:50 -- common/autotest_common.sh@936 -- # '[' -z 90677 ']' 00:26:10.666 02:42:50 -- common/autotest_common.sh@940 -- # kill -0 90677 00:26:10.666 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (90677) - No such process 00:26:10.666 Process with pid 90677 is not found 00:26:10.666 02:42:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 90677 is not found' 00:26:10.666 02:42:50 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:10.666 02:42:50 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:10.666 02:42:50 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:10.666 00:26:10.666 real 0m17.831s 00:26:10.666 user 0m38.717s 00:26:10.666 sys 0m0.859s 00:26:10.666 02:42:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:10.666 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.666 ************************************ 00:26:10.666 END TEST spdkcli_nvmf_tcp 00:26:10.666 ************************************ 00:26:10.667 02:42:50 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:10.667 02:42:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:10.667 02:42:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:10.667 02:42:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.667 ************************************ 00:26:10.667 START TEST nvmf_identify_passthru 00:26:10.667 ************************************ 00:26:10.667 02:42:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:10.667 * Looking for test storage... 
00:26:10.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:10.667 02:42:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:10.667 02:42:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:10.667 02:42:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:10.667 02:42:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:10.667 02:42:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:10.667 02:42:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:10.667 02:42:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:10.667 02:42:50 -- scripts/common.sh@335 -- # IFS=.-: 00:26:10.667 02:42:50 -- scripts/common.sh@335 -- # read -ra ver1 00:26:10.667 02:42:50 -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.667 02:42:50 -- scripts/common.sh@336 -- # read -ra ver2 00:26:10.667 02:42:50 -- scripts/common.sh@337 -- # local 'op=<' 00:26:10.667 02:42:50 -- scripts/common.sh@339 -- # ver1_l=2 00:26:10.667 02:42:50 -- scripts/common.sh@340 -- # ver2_l=1 00:26:10.667 02:42:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:10.667 02:42:50 -- scripts/common.sh@343 -- # case "$op" in 00:26:10.667 02:42:50 -- scripts/common.sh@344 -- # : 1 00:26:10.667 02:42:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:10.667 02:42:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:10.667 02:42:50 -- scripts/common.sh@364 -- # decimal 1 00:26:10.667 02:42:50 -- scripts/common.sh@352 -- # local d=1 00:26:10.667 02:42:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.667 02:42:50 -- scripts/common.sh@354 -- # echo 1 00:26:10.667 02:42:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:10.667 02:42:50 -- scripts/common.sh@365 -- # decimal 2 00:26:10.667 02:42:50 -- scripts/common.sh@352 -- # local d=2 00:26:10.667 02:42:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.667 02:42:50 -- scripts/common.sh@354 -- # echo 2 00:26:10.667 02:42:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:10.667 02:42:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:10.667 02:42:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:10.667 02:42:50 -- scripts/common.sh@367 -- # return 0 00:26:10.667 02:42:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.667 02:42:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.667 --rc genhtml_branch_coverage=1 00:26:10.667 --rc genhtml_function_coverage=1 00:26:10.667 --rc genhtml_legend=1 00:26:10.667 --rc geninfo_all_blocks=1 00:26:10.667 --rc geninfo_unexecuted_blocks=1 00:26:10.667 00:26:10.667 ' 00:26:10.667 02:42:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.667 --rc genhtml_branch_coverage=1 00:26:10.667 --rc genhtml_function_coverage=1 00:26:10.667 --rc genhtml_legend=1 00:26:10.667 --rc geninfo_all_blocks=1 00:26:10.667 --rc geninfo_unexecuted_blocks=1 00:26:10.667 00:26:10.667 ' 00:26:10.667 02:42:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.667 --rc genhtml_branch_coverage=1 00:26:10.667 --rc genhtml_function_coverage=1 00:26:10.667 --rc genhtml_legend=1 00:26:10.667 --rc geninfo_all_blocks=1 00:26:10.667 --rc geninfo_unexecuted_blocks=1 00:26:10.667 00:26:10.667 ' 00:26:10.667 
02:42:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.667 --rc genhtml_branch_coverage=1 00:26:10.667 --rc genhtml_function_coverage=1 00:26:10.667 --rc genhtml_legend=1 00:26:10.667 --rc geninfo_all_blocks=1 00:26:10.667 --rc geninfo_unexecuted_blocks=1 00:26:10.667 00:26:10.667 ' 00:26:10.667 02:42:50 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:10.667 02:42:50 -- nvmf/common.sh@7 -- # uname -s 00:26:10.667 02:42:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.667 02:42:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.667 02:42:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.667 02:42:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.667 02:42:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.667 02:42:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.667 02:42:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.667 02:42:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.667 02:42:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.667 02:42:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.667 02:42:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:26:10.667 02:42:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:26:10.667 02:42:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.667 02:42:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.667 02:42:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:10.667 02:42:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.667 02:42:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.667 02:42:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.667 02:42:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.667 02:42:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.667 02:42:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.667 02:42:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.667 02:42:50 -- paths/export.sh@5 -- # export PATH 00:26:10.667 02:42:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.667 02:42:50 -- nvmf/common.sh@46 -- # : 0 00:26:10.667 02:42:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:10.667 02:42:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:10.667 02:42:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:10.667 02:42:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.667 02:42:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.667 02:42:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:10.667 02:42:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:10.667 02:42:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:10.667 02:42:50 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.667 02:42:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.667 02:42:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.667 02:42:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.667 02:42:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.667 02:42:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.667 02:42:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.667 02:42:50 -- paths/export.sh@5 -- # export PATH 00:26:10.667 02:42:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.667 02:42:50 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:26:10.667 02:42:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:10.667 02:42:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.667 02:42:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:10.667 02:42:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:10.667 02:42:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:10.668 02:42:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.668 02:42:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:10.668 02:42:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.668 02:42:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:10.668 02:42:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:10.668 02:42:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:10.668 02:42:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:10.668 02:42:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:10.668 02:42:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:10.668 02:42:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.668 02:42:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.668 02:42:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:10.668 02:42:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:10.668 02:42:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:10.668 02:42:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:10.668 02:42:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:10.668 02:42:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.668 02:42:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:10.668 02:42:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:10.668 02:42:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:10.668 02:42:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:10.668 02:42:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:10.668 02:42:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:10.668 Cannot find device "nvmf_tgt_br" 00:26:10.668 02:42:50 -- nvmf/common.sh@154 -- # true 00:26:10.668 02:42:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:10.668 Cannot find device "nvmf_tgt_br2" 00:26:10.668 02:42:50 -- nvmf/common.sh@155 -- # true 00:26:10.668 02:42:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:10.668 02:42:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:10.668 Cannot find device "nvmf_tgt_br" 00:26:10.668 02:42:50 -- nvmf/common.sh@157 -- # true 00:26:10.668 02:42:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:10.668 Cannot find device "nvmf_tgt_br2" 00:26:10.668 02:42:50 -- nvmf/common.sh@158 -- # true 00:26:10.668 02:42:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:10.668 02:42:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:10.668 02:42:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:10.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.668 02:42:50 -- nvmf/common.sh@161 -- # true 00:26:10.668 02:42:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:10.668 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:26:10.668 02:42:50 -- nvmf/common.sh@162 -- # true 00:26:10.668 02:42:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:10.668 02:42:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:10.668 02:42:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:10.668 02:42:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:10.668 02:42:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:10.668 02:42:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:10.668 02:42:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:10.668 02:42:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:10.668 02:42:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:10.668 02:42:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:10.668 02:42:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:10.668 02:42:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:10.668 02:42:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:10.668 02:42:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:10.668 02:42:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:10.668 02:42:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:10.668 02:42:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:10.668 02:42:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:10.668 02:42:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:10.668 02:42:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:10.668 02:42:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:10.668 02:42:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:10.668 02:42:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:10.668 02:42:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:10.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:26:10.668 00:26:10.668 --- 10.0.0.2 ping statistics --- 00:26:10.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.668 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:10.668 02:42:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:10.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:10.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:26:10.668 00:26:10.668 --- 10.0.0.3 ping statistics --- 00:26:10.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.668 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:10.668 02:42:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:10.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:10.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:26:10.668 00:26:10.668 --- 10.0.0.1 ping statistics --- 00:26:10.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.668 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:26:10.668 02:42:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.668 02:42:51 -- nvmf/common.sh@421 -- # return 0 00:26:10.668 02:42:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:10.668 02:42:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.668 02:42:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:10.668 02:42:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:10.668 02:42:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.668 02:42:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:10.668 02:42:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:10.668 02:42:51 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:10.668 02:42:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:10.668 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:26:10.668 02:42:51 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:10.668 02:42:51 -- common/autotest_common.sh@1519 -- # bdfs=() 00:26:10.668 02:42:51 -- common/autotest_common.sh@1519 -- # local bdfs 00:26:10.668 02:42:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:26:10.668 02:42:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:26:10.668 02:42:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:10.668 02:42:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:10.668 02:42:51 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:10.668 02:42:51 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:10.668 02:42:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:10.668 02:42:51 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:26:10.668 02:42:51 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:10.668 02:42:51 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:26:10.668 02:42:51 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:26:10.668 02:42:51 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:26:10.668 02:42:51 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:10.668 02:42:51 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:10.668 02:42:51 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:10.927 02:42:51 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:26:10.927 02:42:51 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:10.927 02:42:51 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:10.927 02:42:51 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:10.927 02:42:51 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:26:10.927 02:42:51 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:10.927 02:42:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:10.927 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:26:10.927 02:42:51 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:26:10.927 02:42:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:10.927 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:26:10.927 02:42:51 -- target/identify_passthru.sh@31 -- # nvmfpid=91186 00:26:10.927 02:42:51 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:10.927 02:42:51 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:10.928 02:42:51 -- target/identify_passthru.sh@35 -- # waitforlisten 91186 00:26:10.928 02:42:51 -- common/autotest_common.sh@829 -- # '[' -z 91186 ']' 00:26:10.928 02:42:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.928 02:42:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:10.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.928 02:42:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.928 02:42:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:10.928 02:42:51 -- common/autotest_common.sh@10 -- # set +x 00:26:11.186 [2024-11-21 02:42:51.621507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:11.186 [2024-11-21 02:42:51.621610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.186 [2024-11-21 02:42:51.764151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:11.445 [2024-11-21 02:42:51.881277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:11.445 [2024-11-21 02:42:51.881871] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.445 [2024-11-21 02:42:51.882062] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.445 [2024-11-21 02:42:51.882291] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
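To recap the setup the trace has just walked through: nvmf_veth_init built the test network (nvmf_init_if at 10.0.0.1/24 on the host; nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24 inside the nvmf_tgt_ns_spdk namespace; peer ends enslaved to the nvmf_br bridge and TCP port 4420 opened in iptables), the first local controller at 0000:00:06.0 was identified over PCIe (serial 12340, model QEMU) as the baseline for the passthru comparison, and nvmf_tgt was started inside the namespace with --wait-for-rpc so it stays unconfigured until the test issues RPCs. A condensed sketch of that sequence, with paths shortened and the second target interface plus the link-up steps omitted for brevity:

    # Network plumbing (as performed by nvmf_veth_init in the trace above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Baseline identify of the local drive, then launch the target inside the namespace
    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)   # 0000:00:06.0 here
    build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:'
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # the script then waits for the target to listen on /var/tmp/spdk.sock (waitforlisten 91186)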
00:26:11.445 [2024-11-21 02:42:51.882603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.445 [2024-11-21 02:42:51.882767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.445 [2024-11-21 02:42:51.882850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:11.445 [2024-11-21 02:42:51.882856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.011 02:42:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:12.011 02:42:52 -- common/autotest_common.sh@862 -- # return 0 00:26:12.011 02:42:52 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:12.011 02:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.011 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.011 02:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.011 02:42:52 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:12.011 02:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.011 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.270 [2024-11-21 02:42:52.770701] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:12.270 02:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.270 02:42:52 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.270 02:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.270 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.270 [2024-11-21 02:42:52.780988] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.270 02:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.270 02:42:52 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:12.270 02:42:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:12.270 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.270 02:42:52 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:26:12.270 02:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.270 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.270 Nvme0n1 00:26:12.270 02:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.270 02:42:52 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:12.270 02:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.270 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.528 02:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.528 02:42:52 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:12.528 02:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.528 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.528 02:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.528 02:42:52 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.528 02:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.528 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.528 [2024-11-21 02:42:52.933398] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.528 02:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:12.528 02:42:52 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:12.528 02:42:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.528 02:42:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.528 [2024-11-21 02:42:52.941108] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:12.528 [ 00:26:12.528 { 00:26:12.528 "allow_any_host": true, 00:26:12.528 "hosts": [], 00:26:12.528 "listen_addresses": [], 00:26:12.528 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:12.528 "subtype": "Discovery" 00:26:12.528 }, 00:26:12.528 { 00:26:12.528 "allow_any_host": true, 00:26:12.528 "hosts": [], 00:26:12.528 "listen_addresses": [ 00:26:12.528 { 00:26:12.528 "adrfam": "IPv4", 00:26:12.528 "traddr": "10.0.0.2", 00:26:12.528 "transport": "TCP", 00:26:12.528 "trsvcid": "4420", 00:26:12.528 "trtype": "TCP" 00:26:12.528 } 00:26:12.528 ], 00:26:12.528 "max_cntlid": 65519, 00:26:12.528 "max_namespaces": 1, 00:26:12.528 "min_cntlid": 1, 00:26:12.528 "model_number": "SPDK bdev Controller", 00:26:12.528 "namespaces": [ 00:26:12.528 { 00:26:12.528 "bdev_name": "Nvme0n1", 00:26:12.528 "name": "Nvme0n1", 00:26:12.528 "nguid": "45D98BE269E1426C9AAC9A6D3BB92C92", 00:26:12.528 "nsid": 1, 00:26:12.528 "uuid": "45d98be2-69e1-426c-9aac-9a6d3bb92c92" 00:26:12.528 } 00:26:12.528 ], 00:26:12.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.528 "serial_number": "SPDK00000000000001", 00:26:12.528 "subtype": "NVMe" 00:26:12.528 } 00:26:12.528 ] 00:26:12.528 02:42:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.528 02:42:52 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:12.528 02:42:52 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:12.528 02:42:52 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:12.528 02:42:53 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:26:12.528 02:42:53 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:12.528 02:42:53 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:12.528 02:42:53 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:12.787 02:42:53 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:26:12.787 02:42:53 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:26:12.787 02:42:53 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:26:12.787 02:42:53 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.787 02:42:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.787 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:26:12.787 02:42:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.787 02:42:53 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:12.787 02:42:53 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:12.787 02:42:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:12.787 02:42:53 -- nvmf/common.sh@116 -- # sync 00:26:13.046 02:42:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:13.046 02:42:53 -- nvmf/common.sh@119 -- # set +e 00:26:13.046 02:42:53 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:26:13.046 02:42:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:13.046 rmmod nvme_tcp 00:26:13.046 rmmod nvme_fabrics 00:26:13.046 rmmod nvme_keyring 00:26:13.046 02:42:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:13.046 02:42:53 -- nvmf/common.sh@123 -- # set -e 00:26:13.046 02:42:53 -- nvmf/common.sh@124 -- # return 0 00:26:13.046 02:42:53 -- nvmf/common.sh@477 -- # '[' -n 91186 ']' 00:26:13.046 02:42:53 -- nvmf/common.sh@478 -- # killprocess 91186 00:26:13.046 02:42:53 -- common/autotest_common.sh@936 -- # '[' -z 91186 ']' 00:26:13.046 02:42:53 -- common/autotest_common.sh@940 -- # kill -0 91186 00:26:13.046 02:42:53 -- common/autotest_common.sh@941 -- # uname 00:26:13.046 02:42:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:13.046 02:42:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91186 00:26:13.046 killing process with pid 91186 00:26:13.046 02:42:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:13.046 02:42:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:13.046 02:42:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91186' 00:26:13.046 02:42:53 -- common/autotest_common.sh@955 -- # kill 91186 00:26:13.046 [2024-11-21 02:42:53.545935] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:13.046 02:42:53 -- common/autotest_common.sh@960 -- # wait 91186 00:26:13.305 02:42:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:13.305 02:42:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:13.305 02:42:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:13.305 02:42:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.305 02:42:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:13.305 02:42:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.305 02:42:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:13.305 02:42:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.305 02:42:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:13.305 00:26:13.305 real 0m3.306s 00:26:13.305 user 0m8.055s 00:26:13.305 sys 0m0.885s 00:26:13.305 02:42:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:13.305 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.305 ************************************ 00:26:13.305 END TEST nvmf_identify_passthru 00:26:13.305 ************************************ 00:26:13.305 02:42:53 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:13.305 02:42:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:13.305 02:42:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:13.305 02:42:53 -- common/autotest_common.sh@10 -- # set +x 00:26:13.305 ************************************ 00:26:13.305 START TEST nvmf_dif 00:26:13.305 ************************************ 00:26:13.305 02:42:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:13.564 * Looking for test storage... 
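For reference before the dif tests begin: the identify_passthru run that just finished configured the freshly started target entirely over RPC and then verified that the controller exposed over NVMe/TCP reports the same identity as the local PCIe drive. Condensed from the trace above (rpc_cmd is the test suite's RPC helper; all arguments are as logged):

    rpc_cmd nvmf_set_config --passthru-identify-ctrlr     # enable the custom identify handler
    rpc_cmd framework_start_init                          # leave the --wait-for-rpc state
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Identify over TCP and compare against the PCIe baseline (12340 / QEMU)
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep -E 'Serial Number:|Model Number:'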
00:26:13.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:13.564 02:42:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:13.564 02:42:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:13.564 02:42:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:13.564 02:42:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:13.564 02:42:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:13.564 02:42:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:13.564 02:42:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:13.564 02:42:54 -- scripts/common.sh@335 -- # IFS=.-: 00:26:13.564 02:42:54 -- scripts/common.sh@335 -- # read -ra ver1 00:26:13.564 02:42:54 -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.564 02:42:54 -- scripts/common.sh@336 -- # read -ra ver2 00:26:13.564 02:42:54 -- scripts/common.sh@337 -- # local 'op=<' 00:26:13.564 02:42:54 -- scripts/common.sh@339 -- # ver1_l=2 00:26:13.564 02:42:54 -- scripts/common.sh@340 -- # ver2_l=1 00:26:13.564 02:42:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:13.564 02:42:54 -- scripts/common.sh@343 -- # case "$op" in 00:26:13.564 02:42:54 -- scripts/common.sh@344 -- # : 1 00:26:13.564 02:42:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:13.564 02:42:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:13.564 02:42:54 -- scripts/common.sh@364 -- # decimal 1 00:26:13.564 02:42:54 -- scripts/common.sh@352 -- # local d=1 00:26:13.564 02:42:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.564 02:42:54 -- scripts/common.sh@354 -- # echo 1 00:26:13.564 02:42:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:13.564 02:42:54 -- scripts/common.sh@365 -- # decimal 2 00:26:13.564 02:42:54 -- scripts/common.sh@352 -- # local d=2 00:26:13.564 02:42:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.564 02:42:54 -- scripts/common.sh@354 -- # echo 2 00:26:13.564 02:42:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:13.564 02:42:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:13.564 02:42:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:13.564 02:42:54 -- scripts/common.sh@367 -- # return 0 00:26:13.564 02:42:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.564 02:42:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:13.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.564 --rc genhtml_branch_coverage=1 00:26:13.564 --rc genhtml_function_coverage=1 00:26:13.564 --rc genhtml_legend=1 00:26:13.564 --rc geninfo_all_blocks=1 00:26:13.564 --rc geninfo_unexecuted_blocks=1 00:26:13.564 00:26:13.564 ' 00:26:13.564 02:42:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:13.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.564 --rc genhtml_branch_coverage=1 00:26:13.564 --rc genhtml_function_coverage=1 00:26:13.564 --rc genhtml_legend=1 00:26:13.564 --rc geninfo_all_blocks=1 00:26:13.564 --rc geninfo_unexecuted_blocks=1 00:26:13.564 00:26:13.564 ' 00:26:13.564 02:42:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:13.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.564 --rc genhtml_branch_coverage=1 00:26:13.564 --rc genhtml_function_coverage=1 00:26:13.564 --rc genhtml_legend=1 00:26:13.564 --rc geninfo_all_blocks=1 00:26:13.564 --rc geninfo_unexecuted_blocks=1 00:26:13.564 00:26:13.564 ' 00:26:13.564 
02:42:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:13.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.564 --rc genhtml_branch_coverage=1 00:26:13.564 --rc genhtml_function_coverage=1 00:26:13.564 --rc genhtml_legend=1 00:26:13.564 --rc geninfo_all_blocks=1 00:26:13.564 --rc geninfo_unexecuted_blocks=1 00:26:13.564 00:26:13.564 ' 00:26:13.564 02:42:54 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:13.564 02:42:54 -- nvmf/common.sh@7 -- # uname -s 00:26:13.564 02:42:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.564 02:42:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.564 02:42:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.564 02:42:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.564 02:42:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.564 02:42:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.564 02:42:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.564 02:42:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.564 02:42:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.564 02:42:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.564 02:42:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:26:13.564 02:42:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:26:13.564 02:42:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.564 02:42:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.564 02:42:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:13.564 02:42:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:13.564 02:42:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.564 02:42:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.564 02:42:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.565 02:42:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.565 02:42:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.565 02:42:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.565 02:42:54 -- paths/export.sh@5 -- # export PATH 00:26:13.565 02:42:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.565 02:42:54 -- nvmf/common.sh@46 -- # : 0 00:26:13.565 02:42:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:13.565 02:42:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:13.565 02:42:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:13.565 02:42:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.565 02:42:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.565 02:42:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:13.565 02:42:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:13.565 02:42:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:13.565 02:42:54 -- target/dif.sh@15 -- # NULL_META=16 00:26:13.565 02:42:54 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:13.565 02:42:54 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:13.565 02:42:54 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:13.565 02:42:54 -- target/dif.sh@135 -- # nvmftestinit 00:26:13.565 02:42:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:13.565 02:42:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.565 02:42:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:13.565 02:42:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:13.565 02:42:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:13.565 02:42:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.565 02:42:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:13.565 02:42:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.565 02:42:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:13.565 02:42:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:13.565 02:42:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:13.565 02:42:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:13.565 02:42:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:13.565 02:42:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:13.565 02:42:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.565 02:42:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.565 02:42:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:13.565 02:42:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:13.565 02:42:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:13.565 02:42:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:13.565 02:42:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:13.565 02:42:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.565 02:42:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:13.565 02:42:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:13.565 02:42:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:13.565 02:42:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:13.565 02:42:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:13.565 02:42:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:13.565 Cannot find device "nvmf_tgt_br" 
00:26:13.565 02:42:54 -- nvmf/common.sh@154 -- # true 00:26:13.565 02:42:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:13.565 Cannot find device "nvmf_tgt_br2" 00:26:13.565 02:42:54 -- nvmf/common.sh@155 -- # true 00:26:13.565 02:42:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:13.565 02:42:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:13.565 Cannot find device "nvmf_tgt_br" 00:26:13.565 02:42:54 -- nvmf/common.sh@157 -- # true 00:26:13.565 02:42:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:13.565 Cannot find device "nvmf_tgt_br2" 00:26:13.565 02:42:54 -- nvmf/common.sh@158 -- # true 00:26:13.565 02:42:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:13.565 02:42:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:13.824 02:42:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:13.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.824 02:42:54 -- nvmf/common.sh@161 -- # true 00:26:13.824 02:42:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:13.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.824 02:42:54 -- nvmf/common.sh@162 -- # true 00:26:13.824 02:42:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:13.824 02:42:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:13.824 02:42:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:13.824 02:42:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:13.824 02:42:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:13.824 02:42:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:13.824 02:42:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:13.824 02:42:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:13.824 02:42:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:13.824 02:42:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:13.824 02:42:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:13.824 02:42:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:13.824 02:42:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:13.824 02:42:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:13.824 02:42:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:13.824 02:42:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:13.824 02:42:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:13.824 02:42:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:13.824 02:42:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:13.824 02:42:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:13.824 02:42:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:13.824 02:42:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:13.824 02:42:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:13.824 02:42:54 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:13.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:26:13.824 00:26:13.824 --- 10.0.0.2 ping statistics --- 00:26:13.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.824 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:26:13.824 02:42:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:13.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:13.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:26:13.824 00:26:13.824 --- 10.0.0.3 ping statistics --- 00:26:13.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.824 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:13.824 02:42:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:13.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:26:13.824 00:26:13.824 --- 10.0.0.1 ping statistics --- 00:26:13.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.824 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:26:13.824 02:42:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.824 02:42:54 -- nvmf/common.sh@421 -- # return 0 00:26:13.824 02:42:54 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:13.824 02:42:54 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:14.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:14.392 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:14.392 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:14.392 02:42:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.392 02:42:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:14.392 02:42:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:14.392 02:42:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.392 02:42:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:14.392 02:42:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:14.392 02:42:54 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:14.392 02:42:54 -- target/dif.sh@137 -- # nvmfappstart 00:26:14.392 02:42:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:14.392 02:42:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:14.392 02:42:54 -- common/autotest_common.sh@10 -- # set +x 00:26:14.392 02:42:54 -- nvmf/common.sh@469 -- # nvmfpid=91539 00:26:14.392 02:42:54 -- nvmf/common.sh@470 -- # waitforlisten 91539 00:26:14.392 02:42:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:14.392 02:42:54 -- common/autotest_common.sh@829 -- # '[' -z 91539 ']' 00:26:14.392 02:42:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.392 02:42:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.392 02:42:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
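At this point dif.sh has rebuilt the same veth/namespace topology, run setup.sh to claim the NVMe devices with uio_pci_generic, appended --dif-insert-or-strip to the transport options, and started a fresh nvmf_tgt (pid 91539) inside the namespace. The fio_dif_1_default test that follows configures the target roughly like this (the RPCs below appear in the next trace lines; the null bdev arguments come from NULL_SIZE=64, NULL_BLOCK_SIZE=512, NULL_META=16, NULL_DIF=1 set earlier):

    rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420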
00:26:14.392 02:42:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.392 02:42:54 -- common/autotest_common.sh@10 -- # set +x 00:26:14.392 [2024-11-21 02:42:54.956584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:14.392 [2024-11-21 02:42:54.956672] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.651 [2024-11-21 02:42:55.085246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.651 [2024-11-21 02:42:55.173381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:14.651 [2024-11-21 02:42:55.173508] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.651 [2024-11-21 02:42:55.173521] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.651 [2024-11-21 02:42:55.173529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.651 [2024-11-21 02:42:55.173560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.586 02:42:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:15.586 02:42:55 -- common/autotest_common.sh@862 -- # return 0 00:26:15.586 02:42:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:15.586 02:42:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:15.586 02:42:55 -- common/autotest_common.sh@10 -- # set +x 00:26:15.586 02:42:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.586 02:42:56 -- target/dif.sh@139 -- # create_transport 00:26:15.586 02:42:56 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:15.586 02:42:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.586 02:42:56 -- common/autotest_common.sh@10 -- # set +x 00:26:15.586 [2024-11-21 02:42:56.045530] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.586 02:42:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.586 02:42:56 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:15.586 02:42:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:15.586 02:42:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:15.586 02:42:56 -- common/autotest_common.sh@10 -- # set +x 00:26:15.586 ************************************ 00:26:15.586 START TEST fio_dif_1_default 00:26:15.586 ************************************ 00:26:15.586 02:42:56 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:26:15.586 02:42:56 -- target/dif.sh@86 -- # create_subsystems 0 00:26:15.586 02:42:56 -- target/dif.sh@28 -- # local sub 00:26:15.586 02:42:56 -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.586 02:42:56 -- target/dif.sh@31 -- # create_subsystem 0 00:26:15.586 02:42:56 -- target/dif.sh@18 -- # local sub_id=0 00:26:15.586 02:42:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:15.586 02:42:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.586 02:42:56 -- common/autotest_common.sh@10 -- # set +x 00:26:15.586 bdev_null0 00:26:15.586 02:42:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.586 02:42:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:15.586 02:42:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.586 02:42:56 -- common/autotest_common.sh@10 -- # set +x 00:26:15.586 02:42:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.586 02:42:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:15.586 02:42:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.586 02:42:56 -- common/autotest_common.sh@10 -- # set +x 00:26:15.586 02:42:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.586 02:42:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:15.586 02:42:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.586 02:42:56 -- common/autotest_common.sh@10 -- # set +x 00:26:15.586 [2024-11-21 02:42:56.093647] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.586 02:42:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.586 02:42:56 -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:15.586 02:42:56 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:15.586 02:42:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:15.587 02:42:56 -- nvmf/common.sh@520 -- # config=() 00:26:15.587 02:42:56 -- nvmf/common.sh@520 -- # local subsystem config 00:26:15.587 02:42:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.587 02:42:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.587 02:42:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.587 { 00:26:15.587 "params": { 00:26:15.587 "name": "Nvme$subsystem", 00:26:15.587 "trtype": "$TEST_TRANSPORT", 00:26:15.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.587 "adrfam": "ipv4", 00:26:15.587 "trsvcid": "$NVMF_PORT", 00:26:15.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.587 "hdgst": ${hdgst:-false}, 00:26:15.587 "ddgst": ${ddgst:-false} 00:26:15.587 }, 00:26:15.587 "method": "bdev_nvme_attach_controller" 00:26:15.587 } 00:26:15.587 EOF 00:26:15.587 )") 00:26:15.587 02:42:56 -- target/dif.sh@82 -- # gen_fio_conf 00:26:15.587 02:42:56 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.587 02:42:56 -- target/dif.sh@54 -- # local file 00:26:15.587 02:42:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:15.587 02:42:56 -- target/dif.sh@56 -- # cat 00:26:15.587 02:42:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:15.587 02:42:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:15.587 02:42:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.587 02:42:56 -- common/autotest_common.sh@1330 -- # shift 00:26:15.587 02:42:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:15.587 02:42:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.587 02:42:56 -- nvmf/common.sh@542 -- # cat 00:26:15.587 02:42:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.587 02:42:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:15.587 02:42:56 -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.587 
02:42:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:15.587 02:42:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:15.587 02:42:56 -- nvmf/common.sh@544 -- # jq . 00:26:15.587 02:42:56 -- nvmf/common.sh@545 -- # IFS=, 00:26:15.587 02:42:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:15.587 "params": { 00:26:15.587 "name": "Nvme0", 00:26:15.587 "trtype": "tcp", 00:26:15.587 "traddr": "10.0.0.2", 00:26:15.587 "adrfam": "ipv4", 00:26:15.587 "trsvcid": "4420", 00:26:15.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:15.587 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:15.587 "hdgst": false, 00:26:15.587 "ddgst": false 00:26:15.587 }, 00:26:15.587 "method": "bdev_nvme_attach_controller" 00:26:15.587 }' 00:26:15.587 02:42:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:15.587 02:42:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:15.587 02:42:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.587 02:42:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.587 02:42:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:15.587 02:42:56 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:15.587 02:42:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:15.587 02:42:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:15.587 02:42:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:15.587 02:42:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.845 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:15.845 fio-3.35 00:26:15.846 Starting 1 thread 00:26:16.413 [2024-11-21 02:42:56.767873] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
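The JSON printed just above is what fio receives on /dev/fd/62: a single bdev_nvme_attach_controller entry pointing at nqn.2016-06.io.spdk:cnode0 over TCP, so the spdk_bdev ioengine can attach the remote namespace as a bdev. The invocation itself, shortened from the trace (the job file arrives on /dev/fd/61; its body is generated by gen_fio_conf and is not shown verbatim, so apart from the rw/bs/iodepth values visible in the fio banner the job options sketched here are illustrative):

    LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
    # job file (roughly): one randread job named filename0, bs=4096, iodepth=4,
    # with its filename set to the bdev created by the JSON config above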
00:26:16.413 [2024-11-21 02:42:56.767952] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:26.381 00:26:26.381 filename0: (groupid=0, jobs=1): err= 0: pid=91629: Thu Nov 21 02:43:06 2024 00:26:26.381 read: IOPS=7610, BW=29.7MiB/s (31.2MB/s)(298MiB/10031msec) 00:26:26.381 slat (usec): min=5, max=428, avg= 6.55, stdev= 2.20 00:26:26.381 clat (usec): min=346, max=42402, avg=506.38, stdev=2279.91 00:26:26.381 lat (usec): min=352, max=42411, avg=512.93, stdev=2279.97 00:26:26.381 clat percentiles (usec): 00:26:26.381 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 363], 00:26:26.381 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 375], 00:26:26.381 | 70.00th=[ 383], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 424], 00:26:26.381 | 99.00th=[ 478], 99.50th=[ 519], 99.90th=[41157], 99.95th=[41157], 00:26:26.381 | 99.99th=[42206] 00:26:26.381 bw ( KiB/s): min=20247, max=40576, per=100.00%, avg=30532.35, stdev=5512.83, samples=20 00:26:26.381 iops : min= 5061, max=10144, avg=7633.05, stdev=1378.28, samples=20 00:26:26.381 lat (usec) : 500=99.36%, 750=0.30%, 1000=0.02% 00:26:26.381 lat (msec) : 2=0.01%, 10=0.01%, 50=0.31% 00:26:26.381 cpu : usr=86.26%, sys=11.01%, ctx=20, majf=0, minf=9 00:26:26.381 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:26.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:26.381 issued rwts: total=76340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:26.381 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:26.381 00:26:26.381 Run status group 0 (all jobs): 00:26:26.381 READ: bw=29.7MiB/s (31.2MB/s), 29.7MiB/s-29.7MiB/s (31.2MB/s-31.2MB/s), io=298MiB (313MB), run=10031-10031msec 00:26:26.639 02:43:07 -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:26.639 02:43:07 -- target/dif.sh@43 -- # local sub 00:26:26.639 02:43:07 -- target/dif.sh@45 -- # for sub in "$@" 00:26:26.639 02:43:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:26.639 02:43:07 -- target/dif.sh@36 -- # local sub_id=0 00:26:26.639 02:43:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:26.639 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.639 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.639 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.639 02:43:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:26.639 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.639 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.639 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.639 00:26:26.639 real 0m11.138s 00:26:26.639 user 0m9.390s 00:26:26.639 sys 0m1.391s 00:26:26.639 02:43:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:26.639 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.639 ************************************ 00:26:26.639 END TEST fio_dif_1_default 00:26:26.639 ************************************ 00:26:26.639 02:43:07 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:26.639 02:43:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:26.639 02:43:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:26.639 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.639 ************************************ 00:26:26.639 START 
TEST fio_dif_1_multi_subsystems 00:26:26.639 ************************************ 00:26:26.639 02:43:07 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:26:26.639 02:43:07 -- target/dif.sh@92 -- # local files=1 00:26:26.639 02:43:07 -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:26.639 02:43:07 -- target/dif.sh@28 -- # local sub 00:26:26.639 02:43:07 -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.639 02:43:07 -- target/dif.sh@31 -- # create_subsystem 0 00:26:26.639 02:43:07 -- target/dif.sh@18 -- # local sub_id=0 00:26:26.639 02:43:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:26.639 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.639 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.639 bdev_null0 00:26:26.639 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.639 02:43:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:26.639 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.640 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.640 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.640 02:43:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:26.640 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.640 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.640 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.640 02:43:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:26.640 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.640 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.640 [2024-11-21 02:43:07.283436] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.898 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.898 02:43:07 -- target/dif.sh@30 -- # for sub in "$@" 00:26:26.898 02:43:07 -- target/dif.sh@31 -- # create_subsystem 1 00:26:26.898 02:43:07 -- target/dif.sh@18 -- # local sub_id=1 00:26:26.898 02:43:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:26.898 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.898 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.898 bdev_null1 00:26:26.898 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.898 02:43:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:26.899 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.899 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.899 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.899 02:43:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:26.899 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.899 02:43:07 -- common/autotest_common.sh@10 -- # set +x 00:26:26.899 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.899 02:43:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.899 02:43:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.899 02:43:07 -- 
common/autotest_common.sh@10 -- # set +x 00:26:26.899 02:43:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.899 02:43:07 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:26.899 02:43:07 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:26.899 02:43:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:26.899 02:43:07 -- nvmf/common.sh@520 -- # config=() 00:26:26.899 02:43:07 -- nvmf/common.sh@520 -- # local subsystem config 00:26:26.899 02:43:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:26.899 02:43:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.899 02:43:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:26.899 { 00:26:26.899 "params": { 00:26:26.899 "name": "Nvme$subsystem", 00:26:26.899 "trtype": "$TEST_TRANSPORT", 00:26:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.899 "adrfam": "ipv4", 00:26:26.899 "trsvcid": "$NVMF_PORT", 00:26:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.899 "hdgst": ${hdgst:-false}, 00:26:26.899 "ddgst": ${ddgst:-false} 00:26:26.899 }, 00:26:26.899 "method": "bdev_nvme_attach_controller" 00:26:26.899 } 00:26:26.899 EOF 00:26:26.899 )") 00:26:26.899 02:43:07 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:26.899 02:43:07 -- target/dif.sh@82 -- # gen_fio_conf 00:26:26.899 02:43:07 -- target/dif.sh@54 -- # local file 00:26:26.899 02:43:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:26.899 02:43:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:26.899 02:43:07 -- target/dif.sh@56 -- # cat 00:26:26.899 02:43:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:26.899 02:43:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.899 02:43:07 -- common/autotest_common.sh@1330 -- # shift 00:26:26.899 02:43:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:26.899 02:43:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.899 02:43:07 -- nvmf/common.sh@542 -- # cat 00:26:26.899 02:43:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.899 02:43:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:26.899 02:43:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:26.899 02:43:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:26.899 02:43:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:26.899 { 00:26:26.899 "params": { 00:26:26.899 "name": "Nvme$subsystem", 00:26:26.899 "trtype": "$TEST_TRANSPORT", 00:26:26.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.899 "adrfam": "ipv4", 00:26:26.899 "trsvcid": "$NVMF_PORT", 00:26:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.899 "hdgst": ${hdgst:-false}, 00:26:26.899 "ddgst": ${ddgst:-false} 00:26:26.899 }, 00:26:26.899 "method": "bdev_nvme_attach_controller" 00:26:26.899 } 00:26:26.899 EOF 00:26:26.899 )") 00:26:26.899 02:43:07 -- nvmf/common.sh@542 -- # cat 00:26:26.899 02:43:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:26.899 02:43:07 -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.899 02:43:07 -- target/dif.sh@73 -- # cat 00:26:26.899 02:43:07 -- target/dif.sh@72 
-- # (( file++ )) 00:26:26.899 02:43:07 -- target/dif.sh@72 -- # (( file <= files )) 00:26:26.899 02:43:07 -- nvmf/common.sh@544 -- # jq . 00:26:26.899 02:43:07 -- nvmf/common.sh@545 -- # IFS=, 00:26:26.899 02:43:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:26.899 "params": { 00:26:26.899 "name": "Nvme0", 00:26:26.899 "trtype": "tcp", 00:26:26.899 "traddr": "10.0.0.2", 00:26:26.899 "adrfam": "ipv4", 00:26:26.899 "trsvcid": "4420", 00:26:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.899 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:26.899 "hdgst": false, 00:26:26.899 "ddgst": false 00:26:26.899 }, 00:26:26.899 "method": "bdev_nvme_attach_controller" 00:26:26.899 },{ 00:26:26.899 "params": { 00:26:26.899 "name": "Nvme1", 00:26:26.899 "trtype": "tcp", 00:26:26.899 "traddr": "10.0.0.2", 00:26:26.899 "adrfam": "ipv4", 00:26:26.899 "trsvcid": "4420", 00:26:26.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:26.899 "hdgst": false, 00:26:26.899 "ddgst": false 00:26:26.899 }, 00:26:26.899 "method": "bdev_nvme_attach_controller" 00:26:26.899 }' 00:26:26.899 02:43:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:26.899 02:43:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:26.899 02:43:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:26.899 02:43:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:26.899 02:43:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:26.899 02:43:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:26.899 02:43:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:26.899 02:43:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:26.899 02:43:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:26.899 02:43:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:27.158 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:27.158 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:27.158 fio-3.35 00:26:27.158 Starting 2 threads 00:26:27.724 [2024-11-21 02:43:08.120830] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
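For reference, the subsystem setup traced above can be reproduced by hand against an already-running nvmf_tgt. This is a minimal sketch, assuming the stock scripts/rpc.py helper from the SPDK repo, the default /var/tmp/spdk.sock RPC socket, and a TCP transport created earlier in the test run; the method names and arguments are copied from the rpc_cmd calls in the trace.

  # One 64 MB null bdev per subsystem: 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1

  # Expose each bdev through its own NVMe-oF subsystem on the same TCP listener
  for i in 0 1; do
      scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          --serial-number "53313233-$i" --allow-any-host
      scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
      scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done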
00:26:27.724 [2024-11-21 02:43:08.120907] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:37.692 00:26:37.692 filename0: (groupid=0, jobs=1): err= 0: pid=91789: Thu Nov 21 02:43:18 2024 00:26:37.692 read: IOPS=185, BW=741KiB/s (759kB/s)(7440KiB/10038msec) 00:26:37.692 slat (nsec): min=5987, max=45174, avg=9016.36, stdev=5028.02 00:26:37.692 clat (usec): min=353, max=41462, avg=21557.91, stdev=20200.99 00:26:37.692 lat (usec): min=360, max=41471, avg=21566.93, stdev=20200.91 00:26:37.692 clat percentiles (usec): 00:26:37.692 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 396], 20.00th=[ 412], 00:26:37.692 | 30.00th=[ 424], 40.00th=[ 449], 50.00th=[40633], 60.00th=[40633], 00:26:37.692 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:37.692 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:26:37.692 | 99.99th=[41681] 00:26:37.692 bw ( KiB/s): min= 512, max= 1024, per=45.50%, avg=742.40, stdev=142.58, samples=20 00:26:37.692 iops : min= 128, max= 256, avg=185.60, stdev=35.65, samples=20 00:26:37.692 lat (usec) : 500=45.16%, 750=1.94%, 1000=0.65% 00:26:37.692 lat (msec) : 50=52.26% 00:26:37.692 cpu : usr=97.88%, sys=1.74%, ctx=13, majf=0, minf=0 00:26:37.692 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.692 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.692 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:37.692 filename1: (groupid=0, jobs=1): err= 0: pid=91790: Thu Nov 21 02:43:18 2024 00:26:37.692 read: IOPS=223, BW=892KiB/s (914kB/s)(8928KiB/10005msec) 00:26:37.692 slat (usec): min=6, max=153, avg= 9.33, stdev= 5.97 00:26:37.692 clat (usec): min=347, max=41681, avg=17900.09, stdev=20051.69 00:26:37.692 lat (usec): min=353, max=41705, avg=17909.42, stdev=20051.65 00:26:37.692 clat percentiles (usec): 00:26:37.692 | 1.00th=[ 355], 5.00th=[ 367], 10.00th=[ 379], 20.00th=[ 396], 00:26:37.692 | 30.00th=[ 408], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[40633], 00:26:37.692 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:37.692 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:37.692 | 99.99th=[41681] 00:26:37.692 bw ( KiB/s): min= 608, max= 1248, per=54.52%, avg=889.26, stdev=183.70, samples=19 00:26:37.692 iops : min= 152, max= 312, avg=222.32, stdev=45.92, samples=19 00:26:37.692 lat (usec) : 500=53.00%, 750=3.45%, 1000=0.18% 00:26:37.692 lat (msec) : 2=0.18%, 50=43.19% 00:26:37.692 cpu : usr=97.18%, sys=2.19%, ctx=103, majf=0, minf=0 00:26:37.692 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:37.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:37.692 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:37.692 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:37.692 00:26:37.692 Run status group 0 (all jobs): 00:26:37.692 READ: bw=1631KiB/s (1670kB/s), 741KiB/s-892KiB/s (759kB/s-914kB/s), io=16.0MiB (16.8MB), run=10005-10038msec 00:26:37.950 02:43:18 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:37.950 02:43:18 -- target/dif.sh@43 -- # local sub 00:26:37.950 02:43:18 -- target/dif.sh@45 -- # for sub in "$@" 00:26:37.950 
02:43:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:37.950 02:43:18 -- target/dif.sh@36 -- # local sub_id=0 00:26:37.950 02:43:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:37.950 02:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.950 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:37.950 02:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.950 02:43:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:37.950 02:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.950 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:37.950 02:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.950 02:43:18 -- target/dif.sh@45 -- # for sub in "$@" 00:26:37.950 02:43:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:37.950 02:43:18 -- target/dif.sh@36 -- # local sub_id=1 00:26:37.950 02:43:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.950 02:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.950 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:37.950 02:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.950 02:43:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:37.950 02:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.950 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:37.950 02:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.950 00:26:37.950 real 0m11.281s 00:26:37.950 user 0m20.403s 00:26:37.950 sys 0m0.719s 00:26:37.950 02:43:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:37.950 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:37.950 ************************************ 00:26:37.950 END TEST fio_dif_1_multi_subsystems 00:26:37.950 ************************************ 00:26:37.950 02:43:18 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:37.950 02:43:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:37.950 02:43:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:37.950 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.208 ************************************ 00:26:38.208 START TEST fio_dif_rand_params 00:26:38.208 ************************************ 00:26:38.208 02:43:18 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:38.208 02:43:18 -- target/dif.sh@100 -- # local NULL_DIF 00:26:38.208 02:43:18 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:38.208 02:43:18 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:38.208 02:43:18 -- target/dif.sh@103 -- # bs=128k 00:26:38.208 02:43:18 -- target/dif.sh@103 -- # numjobs=3 00:26:38.208 02:43:18 -- target/dif.sh@103 -- # iodepth=3 00:26:38.208 02:43:18 -- target/dif.sh@103 -- # runtime=5 00:26:38.209 02:43:18 -- target/dif.sh@105 -- # create_subsystems 0 00:26:38.209 02:43:18 -- target/dif.sh@28 -- # local sub 00:26:38.209 02:43:18 -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.209 02:43:18 -- target/dif.sh@31 -- # create_subsystem 0 00:26:38.209 02:43:18 -- target/dif.sh@18 -- # local sub_id=0 00:26:38.209 02:43:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:38.209 02:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.209 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.209 bdev_null0 00:26:38.209 02:43:18 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.209 02:43:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:38.209 02:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.209 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.209 02:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.209 02:43:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:38.209 02:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.209 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.209 02:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.209 02:43:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:38.209 02:43:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.209 02:43:18 -- common/autotest_common.sh@10 -- # set +x 00:26:38.209 [2024-11-21 02:43:18.627244] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.209 02:43:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.209 02:43:18 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:38.209 02:43:18 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:38.209 02:43:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:38.209 02:43:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.209 02:43:18 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.209 02:43:18 -- target/dif.sh@82 -- # gen_fio_conf 00:26:38.209 02:43:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:38.209 02:43:18 -- target/dif.sh@54 -- # local file 00:26:38.209 02:43:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:38.209 02:43:18 -- target/dif.sh@56 -- # cat 00:26:38.209 02:43:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:38.209 02:43:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:38.209 02:43:18 -- common/autotest_common.sh@1330 -- # shift 00:26:38.209 02:43:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:38.209 02:43:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.209 02:43:18 -- nvmf/common.sh@520 -- # config=() 00:26:38.209 02:43:18 -- nvmf/common.sh@520 -- # local subsystem config 00:26:38.209 02:43:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.209 02:43:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.209 { 00:26:38.209 "params": { 00:26:38.209 "name": "Nvme$subsystem", 00:26:38.209 "trtype": "$TEST_TRANSPORT", 00:26:38.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.209 "adrfam": "ipv4", 00:26:38.209 "trsvcid": "$NVMF_PORT", 00:26:38.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.209 "hdgst": ${hdgst:-false}, 00:26:38.209 "ddgst": ${ddgst:-false} 00:26:38.209 }, 00:26:38.209 "method": "bdev_nvme_attach_controller" 00:26:38.209 } 00:26:38.209 EOF 00:26:38.209 )") 00:26:38.209 02:43:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:38.209 02:43:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:38.209 
02:43:18 -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.209 02:43:18 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:38.209 02:43:18 -- nvmf/common.sh@542 -- # cat 00:26:38.209 02:43:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:38.209 02:43:18 -- nvmf/common.sh@544 -- # jq . 00:26:38.209 02:43:18 -- nvmf/common.sh@545 -- # IFS=, 00:26:38.209 02:43:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:38.209 "params": { 00:26:38.209 "name": "Nvme0", 00:26:38.209 "trtype": "tcp", 00:26:38.209 "traddr": "10.0.0.2", 00:26:38.209 "adrfam": "ipv4", 00:26:38.209 "trsvcid": "4420", 00:26:38.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:38.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:38.209 "hdgst": false, 00:26:38.209 "ddgst": false 00:26:38.209 }, 00:26:38.209 "method": "bdev_nvme_attach_controller" 00:26:38.209 }' 00:26:38.209 02:43:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:38.209 02:43:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:38.209 02:43:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.209 02:43:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:38.209 02:43:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:38.209 02:43:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:38.209 02:43:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:38.209 02:43:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:38.209 02:43:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:38.209 02:43:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.467 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:38.467 ... 00:26:38.467 fio-3.35 00:26:38.467 Starting 3 threads 00:26:38.724 [2024-11-21 02:43:19.267573] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
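The fio job driving the three threads started above is generated on the fly and handed to fio through /dev/fd/61, with the bdev_nvme attach configuration on /dev/fd/62. A roughly equivalent standalone invocation is sketched here; it is illustrative only: nvme_attach.json is a placeholder for a file holding the bdev_nvme_attach_controller parameters printed in the trace, the bs/numjobs/iodepth/runtime values are the ones this test sets, Nvme0n1 is assumed to be the namespace bdev created by attaching controller Nvme0, and time_based is inferred from the fixed ~5 s runtimes in the results.

  # spdk_bdev ioengine: "filename" names an SPDK bdev, not a kernel block device
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme_attach.json \
      --thread=1 --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=128k --numjobs=3 --iodepth=3 \
      --runtime=5 --time_based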
00:26:38.724 [2024-11-21 02:43:19.267645] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:43.990 00:26:43.990 filename0: (groupid=0, jobs=1): err= 0: pid=91946: Thu Nov 21 02:43:24 2024 00:26:43.990 read: IOPS=339, BW=42.4MiB/s (44.5MB/s)(212MiB/5003msec) 00:26:43.990 slat (nsec): min=5931, max=44269, avg=9379.08, stdev=4740.16 00:26:43.990 clat (usec): min=3488, max=48786, avg=8814.28, stdev=4064.53 00:26:43.990 lat (usec): min=3494, max=48796, avg=8823.66, stdev=4064.92 00:26:43.990 clat percentiles (usec): 00:26:43.990 | 1.00th=[ 3621], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 4015], 00:26:43.990 | 30.00th=[ 7242], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9765], 00:26:43.990 | 70.00th=[11731], 80.00th=[12387], 90.00th=[12780], 95.00th=[13304], 00:26:43.990 | 99.00th=[14091], 99.50th=[14353], 99.90th=[47449], 99.95th=[49021], 00:26:43.990 | 99.99th=[49021] 00:26:43.990 bw ( KiB/s): min=35328, max=52992, per=41.39%, avg=43690.67, stdev=6419.17, samples=9 00:26:43.990 iops : min= 276, max= 414, avg=341.33, stdev=50.15, samples=9 00:26:43.990 lat (msec) : 4=19.79%, 10=40.87%, 20=38.99%, 50=0.35% 00:26:43.990 cpu : usr=91.00%, sys=6.66%, ctx=11, majf=0, minf=0 00:26:43.990 IO depths : 1=32.6%, 2=67.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.990 issued rwts: total=1698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.990 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.990 filename0: (groupid=0, jobs=1): err= 0: pid=91947: Thu Nov 21 02:43:24 2024 00:26:43.990 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5005msec) 00:26:43.990 slat (nsec): min=5803, max=56917, avg=12024.04, stdev=5514.80 00:26:43.990 clat (usec): min=3264, max=53050, avg=11696.33, stdev=10306.98 00:26:43.990 lat (usec): min=3273, max=53059, avg=11708.36, stdev=10307.04 00:26:43.990 clat percentiles (usec): 00:26:43.990 | 1.00th=[ 5014], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 6652], 00:26:43.990 | 30.00th=[ 7111], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[10421], 00:26:43.990 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11994], 95.00th=[47973], 00:26:43.990 | 99.00th=[51643], 99.50th=[51643], 99.90th=[52167], 99.95th=[53216], 00:26:43.990 | 99.99th=[53216] 00:26:43.990 bw ( KiB/s): min=22272, max=41472, per=30.85%, avg=32568.89, stdev=6215.29, samples=9 00:26:43.990 iops : min= 174, max= 324, avg=254.44, stdev=48.56, samples=9 00:26:43.991 lat (msec) : 4=0.62%, 10=51.33%, 20=41.26%, 50=4.84%, 100=1.95% 00:26:43.991 cpu : usr=93.78%, sys=4.86%, ctx=8, majf=0, minf=0 00:26:43.991 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.991 issued rwts: total=1282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.991 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.991 filename0: (groupid=0, jobs=1): err= 0: pid=91948: Thu Nov 21 02:43:24 2024 00:26:43.991 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5007msec) 00:26:43.991 slat (usec): min=5, max=220, avg=14.91, stdev= 9.89 00:26:43.991 clat (usec): min=3652, max=51636, avg=13047.93, stdev=12770.15 00:26:43.991 lat (usec): min=3662, max=51655, avg=13062.83, stdev=12769.96 00:26:43.991 clat percentiles (usec): 
00:26:43.991 | 1.00th=[ 3818], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[ 7504], 00:26:43.991 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:26:43.991 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[47449], 95.00th=[49546], 00:26:43.991 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[51643], 00:26:43.991 | 99.99th=[51643] 00:26:43.991 bw ( KiB/s): min=20480, max=38144, per=27.79%, avg=29337.60, stdev=5452.26, samples=10 00:26:43.991 iops : min= 160, max= 298, avg=229.20, stdev=42.60, samples=10 00:26:43.991 lat (msec) : 4=1.31%, 10=75.98%, 20=11.75%, 50=8.09%, 100=2.87% 00:26:43.991 cpu : usr=95.05%, sys=3.62%, ctx=39, majf=0, minf=0 00:26:43.991 IO depths : 1=5.7%, 2=94.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:43.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.991 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.991 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:43.991 00:26:43.991 Run status group 0 (all jobs): 00:26:43.991 READ: bw=103MiB/s (108MB/s), 28.7MiB/s-42.4MiB/s (30.1MB/s-44.5MB/s), io=516MiB (541MB), run=5003-5007msec 00:26:43.991 02:43:24 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:43.991 02:43:24 -- target/dif.sh@43 -- # local sub 00:26:43.991 02:43:24 -- target/dif.sh@45 -- # for sub in "$@" 00:26:43.991 02:43:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:43.991 02:43:24 -- target/dif.sh@36 -- # local sub_id=0 00:26:43.991 02:43:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:43.991 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.991 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:43.991 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.991 02:43:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:43.991 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.991 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:44.250 02:43:24 -- target/dif.sh@109 -- # bs=4k 00:26:44.250 02:43:24 -- target/dif.sh@109 -- # numjobs=8 00:26:44.250 02:43:24 -- target/dif.sh@109 -- # iodepth=16 00:26:44.250 02:43:24 -- target/dif.sh@109 -- # runtime= 00:26:44.250 02:43:24 -- target/dif.sh@109 -- # files=2 00:26:44.250 02:43:24 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:44.250 02:43:24 -- target/dif.sh@28 -- # local sub 00:26:44.250 02:43:24 -- target/dif.sh@30 -- # for sub in "$@" 00:26:44.250 02:43:24 -- target/dif.sh@31 -- # create_subsystem 0 00:26:44.250 02:43:24 -- target/dif.sh@18 -- # local sub_id=0 00:26:44.250 02:43:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 bdev_null0 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 [2024-11-21 02:43:24.672622] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@30 -- # for sub in "$@" 00:26:44.250 02:43:24 -- target/dif.sh@31 -- # create_subsystem 1 00:26:44.250 02:43:24 -- target/dif.sh@18 -- # local sub_id=1 00:26:44.250 02:43:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 bdev_null1 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@30 -- # for sub in "$@" 00:26:44.250 02:43:24 -- target/dif.sh@31 -- # create_subsystem 2 00:26:44.250 02:43:24 -- target/dif.sh@18 -- # local sub_id=2 00:26:44.250 02:43:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.250 bdev_null2 00:26:44.250 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.250 02:43:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:44.250 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.250 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.251 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.251 02:43:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:44.251 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.251 02:43:24 -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.251 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.251 02:43:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:44.251 02:43:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.251 02:43:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.251 02:43:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.251 02:43:24 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:44.251 02:43:24 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:44.251 02:43:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:44.251 02:43:24 -- nvmf/common.sh@520 -- # config=() 00:26:44.251 02:43:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.251 02:43:24 -- nvmf/common.sh@520 -- # local subsystem config 00:26:44.251 02:43:24 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.251 02:43:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:44.251 02:43:24 -- target/dif.sh@82 -- # gen_fio_conf 00:26:44.251 02:43:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:44.251 { 00:26:44.251 "params": { 00:26:44.251 "name": "Nvme$subsystem", 00:26:44.251 "trtype": "$TEST_TRANSPORT", 00:26:44.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.251 "adrfam": "ipv4", 00:26:44.251 "trsvcid": "$NVMF_PORT", 00:26:44.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.251 "hdgst": ${hdgst:-false}, 00:26:44.251 "ddgst": ${ddgst:-false} 00:26:44.251 }, 00:26:44.251 "method": "bdev_nvme_attach_controller" 00:26:44.251 } 00:26:44.251 EOF 00:26:44.251 )") 00:26:44.251 02:43:24 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:44.251 02:43:24 -- target/dif.sh@54 -- # local file 00:26:44.251 02:43:24 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:44.251 02:43:24 -- target/dif.sh@56 -- # cat 00:26:44.251 02:43:24 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:44.251 02:43:24 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:44.251 02:43:24 -- common/autotest_common.sh@1330 -- # shift 00:26:44.251 02:43:24 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:44.251 02:43:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.251 02:43:24 -- nvmf/common.sh@542 -- # cat 00:26:44.251 02:43:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:44.251 02:43:24 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:44.251 02:43:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:44.251 02:43:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:44.251 02:43:24 -- target/dif.sh@72 -- # (( file <= files )) 00:26:44.251 02:43:24 -- target/dif.sh@73 -- # cat 00:26:44.251 02:43:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:44.251 02:43:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:44.251 { 00:26:44.251 "params": { 00:26:44.251 "name": "Nvme$subsystem", 00:26:44.251 "trtype": "$TEST_TRANSPORT", 00:26:44.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.251 "adrfam": "ipv4", 00:26:44.251 "trsvcid": "$NVMF_PORT", 00:26:44.251 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.251 "hdgst": ${hdgst:-false}, 00:26:44.251 "ddgst": ${ddgst:-false} 00:26:44.251 }, 00:26:44.251 "method": "bdev_nvme_attach_controller" 00:26:44.251 } 00:26:44.251 EOF 00:26:44.251 )") 00:26:44.251 02:43:24 -- nvmf/common.sh@542 -- # cat 00:26:44.251 02:43:24 -- target/dif.sh@72 -- # (( file++ )) 00:26:44.251 02:43:24 -- target/dif.sh@72 -- # (( file <= files )) 00:26:44.251 02:43:24 -- target/dif.sh@73 -- # cat 00:26:44.251 02:43:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:44.251 02:43:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:44.251 { 00:26:44.251 "params": { 00:26:44.251 "name": "Nvme$subsystem", 00:26:44.251 "trtype": "$TEST_TRANSPORT", 00:26:44.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.251 "adrfam": "ipv4", 00:26:44.251 "trsvcid": "$NVMF_PORT", 00:26:44.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.251 "hdgst": ${hdgst:-false}, 00:26:44.251 "ddgst": ${ddgst:-false} 00:26:44.251 }, 00:26:44.251 "method": "bdev_nvme_attach_controller" 00:26:44.251 } 00:26:44.251 EOF 00:26:44.251 )") 00:26:44.251 02:43:24 -- target/dif.sh@72 -- # (( file++ )) 00:26:44.251 02:43:24 -- target/dif.sh@72 -- # (( file <= files )) 00:26:44.251 02:43:24 -- nvmf/common.sh@542 -- # cat 00:26:44.251 02:43:24 -- nvmf/common.sh@544 -- # jq . 00:26:44.251 02:43:24 -- nvmf/common.sh@545 -- # IFS=, 00:26:44.251 02:43:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:44.251 "params": { 00:26:44.251 "name": "Nvme0", 00:26:44.251 "trtype": "tcp", 00:26:44.251 "traddr": "10.0.0.2", 00:26:44.251 "adrfam": "ipv4", 00:26:44.251 "trsvcid": "4420", 00:26:44.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:44.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:44.251 "hdgst": false, 00:26:44.251 "ddgst": false 00:26:44.251 }, 00:26:44.251 "method": "bdev_nvme_attach_controller" 00:26:44.251 },{ 00:26:44.251 "params": { 00:26:44.251 "name": "Nvme1", 00:26:44.251 "trtype": "tcp", 00:26:44.251 "traddr": "10.0.0.2", 00:26:44.251 "adrfam": "ipv4", 00:26:44.251 "trsvcid": "4420", 00:26:44.251 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:44.251 "hdgst": false, 00:26:44.251 "ddgst": false 00:26:44.251 }, 00:26:44.251 "method": "bdev_nvme_attach_controller" 00:26:44.251 },{ 00:26:44.251 "params": { 00:26:44.251 "name": "Nvme2", 00:26:44.251 "trtype": "tcp", 00:26:44.251 "traddr": "10.0.0.2", 00:26:44.251 "adrfam": "ipv4", 00:26:44.251 "trsvcid": "4420", 00:26:44.251 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:44.251 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:44.251 "hdgst": false, 00:26:44.251 "ddgst": false 00:26:44.251 }, 00:26:44.251 "method": "bdev_nvme_attach_controller" 00:26:44.251 }' 00:26:44.251 02:43:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:44.251 02:43:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:44.251 02:43:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:44.251 02:43:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:44.251 02:43:24 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:44.251 02:43:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:44.251 02:43:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:44.251 02:43:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:44.251 
02:43:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:44.251 02:43:24 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:44.510 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:44.510 ... 00:26:44.510 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:44.510 ... 00:26:44.510 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:44.510 ... 00:26:44.510 fio-3.35 00:26:44.510 Starting 24 threads 00:26:45.076 [2024-11-21 02:43:25.608445] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:45.076 [2024-11-21 02:43:25.608493] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:57.272 00:26:57.272 filename0: (groupid=0, jobs=1): err= 0: pid=92045: Thu Nov 21 02:43:35 2024 00:26:57.272 read: IOPS=224, BW=896KiB/s (918kB/s)(8980KiB/10022msec) 00:26:57.272 slat (usec): min=4, max=8032, avg=15.65, stdev=169.43 00:26:57.272 clat (msec): min=24, max=163, avg=71.33, stdev=22.40 00:26:57.272 lat (msec): min=24, max=163, avg=71.35, stdev=22.40 00:26:57.272 clat percentiles (msec): 00:26:57.272 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 53], 00:26:57.272 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 74], 00:26:57.272 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 105], 95.00th=[ 117], 00:26:57.272 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165], 00:26:57.272 | 99.99th=[ 165] 00:26:57.272 bw ( KiB/s): min= 640, max= 1216, per=3.85%, avg=884.63, stdev=171.26, samples=19 00:26:57.272 iops : min= 160, max= 304, avg=221.16, stdev=42.81, samples=19 00:26:57.272 lat (msec) : 50=18.44%, 100=70.33%, 250=11.22% 00:26:57.272 cpu : usr=33.26%, sys=0.42%, ctx=938, majf=0, minf=9 00:26:57.272 IO depths : 1=1.6%, 2=3.3%, 4=10.7%, 8=72.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 issued rwts: total=2245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.272 filename0: (groupid=0, jobs=1): err= 0: pid=92046: Thu Nov 21 02:43:35 2024 00:26:57.272 read: IOPS=278, BW=1115KiB/s (1142kB/s)(10.9MiB/10048msec) 00:26:57.272 slat (usec): min=6, max=7030, avg=18.14, stdev=186.61 00:26:57.272 clat (msec): min=8, max=120, avg=57.24, stdev=21.41 00:26:57.272 lat (msec): min=8, max=120, avg=57.26, stdev=21.41 00:26:57.272 clat percentiles (msec): 00:26:57.272 | 1.00th=[ 10], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 41], 00:26:57.272 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 59], 00:26:57.272 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 90], 95.00th=[ 104], 00:26:57.272 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 121], 00:26:57.272 | 99.99th=[ 121] 00:26:57.272 bw ( KiB/s): min= 688, max= 1595, per=4.84%, avg=1113.35, stdev=228.88, samples=20 00:26:57.272 iops : min= 172, max= 398, avg=278.30, stdev=57.14, samples=20 00:26:57.272 lat (msec) : 10=1.39%, 20=0.64%, 50=41.06%, 100=51.45%, 250=5.46% 00:26:57.272 cpu : usr=40.89%, sys=0.48%, ctx=1149, majf=0, minf=9 00:26:57.272 IO depths : 1=0.2%, 2=0.6%, 
4=5.5%, 8=79.7%, 16=13.9%, 32=0.0%, >=64=0.0% 00:26:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 complete : 0=0.0%, 4=89.1%, 8=7.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 issued rwts: total=2801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.272 filename0: (groupid=0, jobs=1): err= 0: pid=92047: Thu Nov 21 02:43:35 2024 00:26:57.272 read: IOPS=217, BW=872KiB/s (893kB/s)(8732KiB/10016msec) 00:26:57.272 slat (usec): min=6, max=4071, avg=16.98, stdev=122.10 00:26:57.272 clat (msec): min=34, max=143, avg=73.27, stdev=20.80 00:26:57.272 lat (msec): min=34, max=143, avg=73.29, stdev=20.80 00:26:57.272 clat percentiles (msec): 00:26:57.272 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:26:57.272 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 79], 00:26:57.272 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 108], 00:26:57.272 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:26:57.272 | 99.99th=[ 144] 00:26:57.272 bw ( KiB/s): min= 640, max= 1128, per=3.77%, avg=867.84, stdev=145.84, samples=19 00:26:57.272 iops : min= 160, max= 282, avg=216.95, stdev=36.47, samples=19 00:26:57.272 lat (msec) : 50=12.64%, 100=75.68%, 250=11.68% 00:26:57.272 cpu : usr=33.66%, sys=0.43%, ctx=893, majf=0, minf=9 00:26:57.272 IO depths : 1=2.3%, 2=5.0%, 4=13.7%, 8=68.3%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 issued rwts: total=2183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.272 filename0: (groupid=0, jobs=1): err= 0: pid=92048: Thu Nov 21 02:43:35 2024 00:26:57.272 read: IOPS=242, BW=969KiB/s (992kB/s)(9736KiB/10049msec) 00:26:57.272 slat (usec): min=4, max=12019, avg=20.60, stdev=292.79 00:26:57.272 clat (msec): min=9, max=140, avg=65.84, stdev=21.37 00:26:57.272 lat (msec): min=9, max=140, avg=65.86, stdev=21.37 00:26:57.272 clat percentiles (msec): 00:26:57.272 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:26:57.272 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 71], 00:26:57.272 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 94], 95.00th=[ 102], 00:26:57.272 | 99.00th=[ 120], 99.50th=[ 134], 99.90th=[ 140], 99.95th=[ 140], 00:26:57.272 | 99.99th=[ 140] 00:26:57.272 bw ( KiB/s): min= 768, max= 1502, per=4.21%, avg=968.70, stdev=185.18, samples=20 00:26:57.272 iops : min= 192, max= 375, avg=242.15, stdev=46.22, samples=20 00:26:57.272 lat (msec) : 10=0.66%, 20=1.31%, 50=21.57%, 100=70.71%, 250=5.75% 00:26:57.272 cpu : usr=33.37%, sys=0.34%, ctx=916, majf=0, minf=9 00:26:57.272 IO depths : 1=1.5%, 2=3.2%, 4=11.2%, 8=72.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 issued rwts: total=2434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.272 filename0: (groupid=0, jobs=1): err= 0: pid=92049: Thu Nov 21 02:43:35 2024 00:26:57.272 read: IOPS=219, BW=879KiB/s (900kB/s)(8804KiB/10017msec) 00:26:57.272 slat (usec): min=4, max=8014, avg=24.63, stdev=269.06 00:26:57.272 clat (msec): min=26, max=152, avg=72.64, stdev=21.99 00:26:57.272 lat 
(msec): min=26, max=152, avg=72.66, stdev=21.99 00:26:57.272 clat percentiles (msec): 00:26:57.272 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 55], 00:26:57.272 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 78], 00:26:57.272 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 102], 95.00th=[ 110], 00:26:57.272 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 153], 99.95th=[ 153], 00:26:57.272 | 99.99th=[ 153] 00:26:57.272 bw ( KiB/s): min= 592, max= 1200, per=3.77%, avg=866.11, stdev=154.39, samples=19 00:26:57.272 iops : min= 148, max= 300, avg=216.53, stdev=38.60, samples=19 00:26:57.272 lat (msec) : 50=14.81%, 100=74.10%, 250=11.09% 00:26:57.272 cpu : usr=36.11%, sys=0.51%, ctx=973, majf=0, minf=9 00:26:57.272 IO depths : 1=2.4%, 2=5.3%, 4=14.7%, 8=66.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 issued rwts: total=2201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.272 filename0: (groupid=0, jobs=1): err= 0: pid=92050: Thu Nov 21 02:43:35 2024 00:26:57.272 read: IOPS=261, BW=1047KiB/s (1072kB/s)(10.3MiB/10043msec) 00:26:57.272 slat (usec): min=4, max=4034, avg=17.96, stdev=146.33 00:26:57.272 clat (msec): min=7, max=132, avg=60.92, stdev=20.80 00:26:57.272 lat (msec): min=7, max=132, avg=60.94, stdev=20.81 00:26:57.272 clat percentiles (msec): 00:26:57.272 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 43], 00:26:57.272 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 63], 00:26:57.272 | 70.00th=[ 71], 80.00th=[ 80], 90.00th=[ 88], 95.00th=[ 100], 00:26:57.272 | 99.00th=[ 120], 99.50th=[ 131], 99.90th=[ 133], 99.95th=[ 133], 00:26:57.272 | 99.99th=[ 133] 00:26:57.272 bw ( KiB/s): min= 712, max= 1512, per=4.55%, avg=1046.70, stdev=208.90, samples=20 00:26:57.272 iops : min= 178, max= 378, avg=261.65, stdev=52.20, samples=20 00:26:57.272 lat (msec) : 10=0.61%, 20=0.61%, 50=29.83%, 100=64.35%, 250=4.60% 00:26:57.272 cpu : usr=43.42%, sys=0.48%, ctx=1373, majf=0, minf=9 00:26:57.272 IO depths : 1=1.0%, 2=2.4%, 4=9.0%, 8=75.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 issued rwts: total=2628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.272 filename0: (groupid=0, jobs=1): err= 0: pid=92051: Thu Nov 21 02:43:35 2024 00:26:57.272 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10021msec) 00:26:57.272 slat (usec): min=3, max=4031, avg=16.46, stdev=139.54 00:26:57.272 clat (msec): min=20, max=146, avg=61.18, stdev=19.97 00:26:57.272 lat (msec): min=20, max=146, avg=61.19, stdev=19.97 00:26:57.272 clat percentiles (msec): 00:26:57.272 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 44], 00:26:57.272 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 63], 00:26:57.272 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 95], 00:26:57.272 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 146], 99.95th=[ 146], 00:26:57.272 | 99.99th=[ 146] 00:26:57.272 bw ( KiB/s): min= 768, max= 1376, per=4.52%, avg=1039.20, stdev=191.08, samples=20 00:26:57.272 iops : min= 192, max= 344, avg=259.80, stdev=47.77, samples=20 00:26:57.272 lat (msec) : 50=32.82%, 100=63.81%, 250=3.37% 00:26:57.272 cpu : 
usr=43.55%, sys=0.68%, ctx=1371, majf=0, minf=9 00:26:57.272 IO depths : 1=1.3%, 2=3.6%, 4=11.2%, 8=71.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:57.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.272 issued rwts: total=2614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.272 filename0: (groupid=0, jobs=1): err= 0: pid=92052: Thu Nov 21 02:43:35 2024 00:26:57.272 read: IOPS=234, BW=938KiB/s (961kB/s)(9396KiB/10014msec) 00:26:57.272 slat (usec): min=5, max=8026, avg=23.94, stdev=297.95 00:26:57.272 clat (msec): min=22, max=143, avg=68.05, stdev=20.85 00:26:57.272 lat (msec): min=22, max=143, avg=68.07, stdev=20.85 00:26:57.272 clat percentiles (msec): 00:26:57.272 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 51], 00:26:57.272 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 71], 00:26:57.272 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 107], 00:26:57.272 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:26:57.272 | 99.99th=[ 144] 00:26:57.272 bw ( KiB/s): min= 640, max= 1144, per=4.04%, avg=928.00, stdev=133.92, samples=19 00:26:57.272 iops : min= 160, max= 286, avg=232.00, stdev=33.48, samples=19 00:26:57.272 lat (msec) : 50=19.28%, 100=72.54%, 250=8.17% 00:26:57.272 cpu : usr=32.60%, sys=0.40%, ctx=892, majf=0, minf=9 00:26:57.273 IO depths : 1=0.9%, 2=2.2%, 4=8.8%, 8=75.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.273 filename1: (groupid=0, jobs=1): err= 0: pid=92053: Thu Nov 21 02:43:35 2024 00:26:57.273 read: IOPS=221, BW=887KiB/s (909kB/s)(8880KiB/10007msec) 00:26:57.273 slat (usec): min=4, max=7979, avg=19.72, stdev=207.57 00:26:57.273 clat (msec): min=6, max=149, avg=71.93, stdev=21.65 00:26:57.273 lat (msec): min=6, max=149, avg=71.95, stdev=21.65 00:26:57.273 clat percentiles (msec): 00:26:57.273 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 56], 00:26:57.273 | 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 79], 00:26:57.273 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 100], 95.00th=[ 108], 00:26:57.273 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:26:57.273 | 99.99th=[ 150] 00:26:57.273 bw ( KiB/s): min= 640, max= 1152, per=3.86%, avg=886.05, stdev=146.60, samples=20 00:26:57.273 iops : min= 160, max= 288, avg=221.50, stdev=36.65, samples=20 00:26:57.273 lat (msec) : 10=0.23%, 50=13.78%, 100=76.04%, 250=9.95% 00:26:57.273 cpu : usr=40.61%, sys=0.42%, ctx=1155, majf=0, minf=9 00:26:57.273 IO depths : 1=2.8%, 2=6.8%, 4=18.2%, 8=62.0%, 16=10.1%, 32=0.0%, >=64=0.0% 00:26:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 complete : 0=0.0%, 4=92.2%, 8=2.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 issued rwts: total=2220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.273 filename1: (groupid=0, jobs=1): err= 0: pid=92054: Thu Nov 21 02:43:35 2024 00:26:57.273 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10021msec) 00:26:57.273 slat (usec): min=3, max=8028, avg=25.65, stdev=283.13 
00:26:57.273 clat (msec): min=24, max=146, avg=61.41, stdev=18.27 00:26:57.273 lat (msec): min=24, max=146, avg=61.43, stdev=18.28 00:26:57.273 clat percentiles (msec): 00:26:57.273 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:26:57.273 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 64], 00:26:57.273 | 70.00th=[ 69], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 92], 00:26:57.273 | 99.00th=[ 114], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 146], 00:26:57.273 | 99.99th=[ 146] 00:26:57.273 bw ( KiB/s): min= 768, max= 1296, per=4.51%, avg=1036.80, stdev=153.76, samples=20 00:26:57.273 iops : min= 192, max= 324, avg=259.15, stdev=38.39, samples=20 00:26:57.273 lat (msec) : 50=27.62%, 100=70.50%, 250=1.88% 00:26:57.273 cpu : usr=44.35%, sys=0.50%, ctx=1285, majf=0, minf=9 00:26:57.273 IO depths : 1=1.5%, 2=3.3%, 4=11.9%, 8=71.4%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 issued rwts: total=2603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.273 filename1: (groupid=0, jobs=1): err= 0: pid=92055: Thu Nov 21 02:43:35 2024 00:26:57.273 read: IOPS=240, BW=962KiB/s (986kB/s)(9648KiB/10024msec) 00:26:57.273 slat (usec): min=6, max=4006, avg=14.25, stdev=81.70 00:26:57.273 clat (msec): min=23, max=146, avg=66.30, stdev=21.22 00:26:57.273 lat (msec): min=23, max=146, avg=66.31, stdev=21.22 00:26:57.273 clat percentiles (msec): 00:26:57.273 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 47], 00:26:57.273 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 71], 00:26:57.273 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 95], 95.00th=[ 105], 00:26:57.273 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 148], 99.95th=[ 148], 00:26:57.273 | 99.99th=[ 148] 00:26:57.273 bw ( KiB/s): min= 688, max= 1168, per=4.19%, avg=962.50, stdev=160.20, samples=20 00:26:57.273 iops : min= 172, max= 292, avg=240.60, stdev=40.03, samples=20 00:26:57.273 lat (msec) : 50=29.93%, 100=63.76%, 250=6.30% 00:26:57.273 cpu : usr=32.44%, sys=0.50%, ctx=909, majf=0, minf=9 00:26:57.273 IO depths : 1=1.0%, 2=2.3%, 4=9.7%, 8=74.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.273 filename1: (groupid=0, jobs=1): err= 0: pid=92056: Thu Nov 21 02:43:35 2024 00:26:57.273 read: IOPS=223, BW=895KiB/s (916kB/s)(8952KiB/10003msec) 00:26:57.273 slat (usec): min=4, max=8033, avg=23.08, stdev=267.89 00:26:57.273 clat (msec): min=6, max=131, avg=71.36, stdev=22.36 00:26:57.273 lat (msec): min=6, max=131, avg=71.38, stdev=22.35 00:26:57.273 clat percentiles (msec): 00:26:57.273 | 1.00th=[ 23], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 54], 00:26:57.273 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 78], 00:26:57.273 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 100], 95.00th=[ 117], 00:26:57.273 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 132], 00:26:57.273 | 99.99th=[ 132] 00:26:57.273 bw ( KiB/s): min= 512, max= 1280, per=3.81%, avg=875.79, stdev=175.66, samples=19 00:26:57.273 iops : min= 128, max= 320, avg=218.95, stdev=43.92, samples=19 00:26:57.273 lat (msec) : 
10=0.71%, 50=12.56%, 100=77.84%, 250=8.89% 00:26:57.273 cpu : usr=44.06%, sys=0.57%, ctx=1379, majf=0, minf=9 00:26:57.273 IO depths : 1=2.2%, 2=5.4%, 4=15.3%, 8=66.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.273 filename1: (groupid=0, jobs=1): err= 0: pid=92057: Thu Nov 21 02:43:35 2024 00:26:57.273 read: IOPS=229, BW=917KiB/s (939kB/s)(9172KiB/10002msec) 00:26:57.273 slat (usec): min=4, max=8019, avg=17.88, stdev=187.12 00:26:57.273 clat (usec): min=1696, max=143372, avg=69690.68, stdev=21639.77 00:26:57.273 lat (usec): min=1703, max=143396, avg=69708.56, stdev=21640.96 00:26:57.273 clat percentiles (msec): 00:26:57.273 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 53], 00:26:57.273 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 73], 00:26:57.273 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 00:26:57.273 | 99.00th=[ 128], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:26:57.273 | 99.99th=[ 144] 00:26:57.273 bw ( KiB/s): min= 640, max= 1024, per=3.89%, avg=893.05, stdev=123.63, samples=19 00:26:57.273 iops : min= 160, max= 256, avg=223.26, stdev=30.91, samples=19 00:26:57.273 lat (msec) : 2=0.57%, 4=0.13%, 10=0.83%, 50=15.22%, 100=74.79% 00:26:57.273 lat (msec) : 250=8.46% 00:26:57.273 cpu : usr=32.94%, sys=0.54%, ctx=913, majf=0, minf=9 00:26:57.273 IO depths : 1=1.0%, 2=2.6%, 4=10.9%, 8=72.3%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 complete : 0=0.0%, 4=90.7%, 8=5.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.273 filename1: (groupid=0, jobs=1): err= 0: pid=92058: Thu Nov 21 02:43:35 2024 00:26:57.273 read: IOPS=289, BW=1157KiB/s (1185kB/s)(11.3MiB/10034msec) 00:26:57.273 slat (usec): min=6, max=10036, avg=22.83, stdev=320.94 00:26:57.273 clat (usec): min=854, max=124122, avg=55046.37, stdev=23982.35 00:26:57.273 lat (usec): min=864, max=124140, avg=55069.20, stdev=23986.89 00:26:57.273 clat percentiles (usec): 00:26:57.273 | 1.00th=[ 1434], 5.00th=[ 3884], 10.00th=[ 30278], 20.00th=[ 38536], 00:26:57.273 | 30.00th=[ 42730], 40.00th=[ 48497], 50.00th=[ 55313], 60.00th=[ 58983], 00:26:57.273 | 70.00th=[ 66323], 80.00th=[ 74974], 90.00th=[ 84411], 95.00th=[ 92799], 00:26:57.273 | 99.00th=[114820], 99.50th=[124257], 99.90th=[124257], 99.95th=[124257], 00:26:57.273 | 99.99th=[124257] 00:26:57.273 bw ( KiB/s): min= 688, max= 2816, per=5.03%, avg=1156.35, stdev=444.23, samples=20 00:26:57.273 iops : min= 172, max= 704, avg=289.05, stdev=111.08, samples=20 00:26:57.273 lat (usec) : 1000=0.07% 00:26:57.273 lat (msec) : 2=4.03%, 4=1.34%, 10=1.72%, 20=0.69%, 50=33.77% 00:26:57.273 lat (msec) : 100=55.69%, 250=2.69% 00:26:57.273 cpu : usr=42.98%, sys=0.54%, ctx=1403, majf=0, minf=0 00:26:57.273 IO depths : 1=1.3%, 2=3.3%, 4=11.7%, 8=71.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 issued rwts: total=2902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.273 
latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.273 filename1: (groupid=0, jobs=1): err= 0: pid=92059: Thu Nov 21 02:43:35 2024 00:26:57.273 read: IOPS=254, BW=1019KiB/s (1044kB/s)(9.98MiB/10024msec) 00:26:57.273 slat (usec): min=6, max=7053, avg=14.49, stdev=139.55 00:26:57.273 clat (msec): min=18, max=121, avg=62.63, stdev=19.63 00:26:57.273 lat (msec): min=18, max=122, avg=62.64, stdev=19.63 00:26:57.273 clat percentiles (msec): 00:26:57.273 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 47], 00:26:57.273 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 64], 00:26:57.273 | 70.00th=[ 70], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 97], 00:26:57.273 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 123], 99.95th=[ 123], 00:26:57.273 | 99.99th=[ 123] 00:26:57.273 bw ( KiB/s): min= 768, max= 1280, per=4.43%, avg=1019.20, stdev=164.49, samples=20 00:26:57.273 iops : min= 192, max= 320, avg=254.80, stdev=41.12, samples=20 00:26:57.273 lat (msec) : 20=0.23%, 50=26.55%, 100=69.85%, 250=3.37% 00:26:57.273 cpu : usr=41.99%, sys=0.60%, ctx=1361, majf=0, minf=9 00:26:57.273 IO depths : 1=1.5%, 2=3.3%, 4=10.4%, 8=72.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:57.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 complete : 0=0.0%, 4=90.4%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.273 issued rwts: total=2554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.273 filename1: (groupid=0, jobs=1): err= 0: pid=92060: Thu Nov 21 02:43:35 2024 00:26:57.273 read: IOPS=229, BW=916KiB/s (938kB/s)(9204KiB/10044msec) 00:26:57.273 slat (nsec): min=4819, max=47531, avg=12402.19, stdev=7462.00 00:26:57.273 clat (msec): min=6, max=145, avg=69.65, stdev=21.46 00:26:57.273 lat (msec): min=6, max=145, avg=69.66, stdev=21.46 00:26:57.273 clat percentiles (msec): 00:26:57.274 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 55], 00:26:57.274 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 72], 00:26:57.274 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 99], 95.00th=[ 117], 00:26:57.274 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 146], 00:26:57.274 | 99.99th=[ 146] 00:26:57.274 bw ( KiB/s): min= 640, max= 1152, per=3.89%, avg=894.32, stdev=171.92, samples=19 00:26:57.274 iops : min= 160, max= 288, avg=223.58, stdev=42.98, samples=19 00:26:57.274 lat (msec) : 10=0.26%, 50=12.99%, 100=77.49%, 250=9.26% 00:26:57.274 cpu : usr=43.73%, sys=0.72%, ctx=1242, majf=0, minf=9 00:26:57.274 IO depths : 1=3.2%, 2=6.9%, 4=16.9%, 8=63.0%, 16=10.0%, 32=0.0%, >=64=0.0% 00:26:57.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 complete : 0=0.0%, 4=92.0%, 8=2.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.274 filename2: (groupid=0, jobs=1): err= 0: pid=92061: Thu Nov 21 02:43:35 2024 00:26:57.274 read: IOPS=211, BW=848KiB/s (868kB/s)(8480KiB/10005msec) 00:26:57.274 slat (usec): min=4, max=8033, avg=28.05, stdev=347.85 00:26:57.274 clat (msec): min=20, max=142, avg=75.33, stdev=21.66 00:26:57.274 lat (msec): min=20, max=142, avg=75.36, stdev=21.66 00:26:57.274 clat percentiles (msec): 00:26:57.274 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 59], 00:26:57.274 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 81], 00:26:57.274 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 107], 
95.00th=[ 118], 00:26:57.274 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:26:57.274 | 99.99th=[ 142] 00:26:57.274 bw ( KiB/s): min= 512, max= 1152, per=3.63%, avg=834.53, stdev=139.49, samples=19 00:26:57.274 iops : min= 128, max= 288, avg=208.63, stdev=34.87, samples=19 00:26:57.274 lat (msec) : 50=9.95%, 100=77.08%, 250=12.97% 00:26:57.274 cpu : usr=33.39%, sys=0.38%, ctx=892, majf=0, minf=9 00:26:57.274 IO depths : 1=2.2%, 2=5.5%, 4=16.2%, 8=65.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:57.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.274 filename2: (groupid=0, jobs=1): err= 0: pid=92062: Thu Nov 21 02:43:35 2024 00:26:57.274 read: IOPS=234, BW=937KiB/s (959kB/s)(9392KiB/10026msec) 00:26:57.274 slat (usec): min=6, max=8044, avg=26.04, stdev=309.50 00:26:57.274 clat (msec): min=14, max=153, avg=68.12, stdev=22.32 00:26:57.274 lat (msec): min=14, max=153, avg=68.15, stdev=22.33 00:26:57.274 clat percentiles (msec): 00:26:57.274 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 50], 00:26:57.274 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 00:26:57.274 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 114], 00:26:57.274 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 155], 00:26:57.274 | 99.99th=[ 155] 00:26:57.274 bw ( KiB/s): min= 768, max= 1296, per=4.06%, avg=932.80, stdev=157.91, samples=20 00:26:57.274 iops : min= 192, max= 324, avg=233.20, stdev=39.48, samples=20 00:26:57.274 lat (msec) : 20=0.38%, 50=21.38%, 100=69.59%, 250=8.65% 00:26:57.274 cpu : usr=37.51%, sys=0.57%, ctx=1171, majf=0, minf=9 00:26:57.274 IO depths : 1=2.0%, 2=4.3%, 4=12.6%, 8=69.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:57.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 issued rwts: total=2348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.274 filename2: (groupid=0, jobs=1): err= 0: pid=92063: Thu Nov 21 02:43:35 2024 00:26:57.274 read: IOPS=224, BW=900KiB/s (921kB/s)(9020KiB/10026msec) 00:26:57.274 slat (usec): min=6, max=8034, avg=19.13, stdev=206.87 00:26:57.274 clat (msec): min=24, max=171, avg=70.93, stdev=22.56 00:26:57.274 lat (msec): min=24, max=171, avg=70.95, stdev=22.56 00:26:57.274 clat percentiles (msec): 00:26:57.274 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 53], 00:26:57.274 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 75], 00:26:57.274 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 110], 00:26:57.274 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 171], 99.95th=[ 171], 00:26:57.274 | 99.99th=[ 171] 00:26:57.274 bw ( KiB/s): min= 552, max= 1200, per=3.89%, avg=895.65, stdev=181.07, samples=20 00:26:57.274 iops : min= 138, max= 300, avg=223.90, stdev=45.27, samples=20 00:26:57.274 lat (msec) : 50=16.54%, 100=73.97%, 250=9.49% 00:26:57.274 cpu : usr=38.63%, sys=0.40%, ctx=1272, majf=0, minf=9 00:26:57.274 IO depths : 1=0.9%, 2=2.1%, 4=9.4%, 8=74.3%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:57.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 complete : 0=0.0%, 4=89.8%, 8=6.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:57.274 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.274 filename2: (groupid=0, jobs=1): err= 0: pid=92064: Thu Nov 21 02:43:35 2024 00:26:57.274 read: IOPS=257, BW=1031KiB/s (1056kB/s)(10.1MiB/10002msec) 00:26:57.274 slat (usec): min=3, max=8022, avg=19.90, stdev=220.52 00:26:57.274 clat (msec): min=4, max=147, avg=61.91, stdev=20.72 00:26:57.274 lat (msec): min=4, max=147, avg=61.93, stdev=20.72 00:26:57.274 clat percentiles (msec): 00:26:57.274 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 48], 00:26:57.274 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 65], 00:26:57.274 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 95], 00:26:57.274 | 99.00th=[ 117], 99.50th=[ 126], 99.90th=[ 148], 99.95th=[ 148], 00:26:57.274 | 99.99th=[ 148] 00:26:57.274 bw ( KiB/s): min= 768, max= 1664, per=4.45%, avg=1022.63, stdev=197.79, samples=19 00:26:57.274 iops : min= 192, max= 416, avg=255.63, stdev=49.46, samples=19 00:26:57.274 lat (msec) : 10=2.48%, 50=26.13%, 100=67.70%, 250=3.68% 00:26:57.274 cpu : usr=37.85%, sys=0.41%, ctx=1237, majf=0, minf=9 00:26:57.274 IO depths : 1=1.4%, 2=3.1%, 4=10.7%, 8=72.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:57.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 issued rwts: total=2579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.274 filename2: (groupid=0, jobs=1): err= 0: pid=92065: Thu Nov 21 02:43:35 2024 00:26:57.274 read: IOPS=227, BW=908KiB/s (930kB/s)(9100KiB/10017msec) 00:26:57.274 slat (usec): min=4, max=1044, avg=12.73, stdev=22.76 00:26:57.274 clat (msec): min=19, max=142, avg=70.37, stdev=20.70 00:26:57.274 lat (msec): min=19, max=142, avg=70.38, stdev=20.70 00:26:57.274 clat percentiles (msec): 00:26:57.274 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 54], 00:26:57.274 | 30.00th=[ 57], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 72], 00:26:57.274 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 110], 00:26:57.274 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 131], 00:26:57.274 | 99.99th=[ 142] 00:26:57.274 bw ( KiB/s): min= 608, max= 1232, per=3.93%, avg=903.65, stdev=162.08, samples=20 00:26:57.274 iops : min= 152, max= 308, avg=225.90, stdev=40.52, samples=20 00:26:57.274 lat (msec) : 20=0.22%, 50=15.52%, 100=74.46%, 250=9.80% 00:26:57.274 cpu : usr=46.15%, sys=0.41%, ctx=1336, majf=0, minf=9 00:26:57.274 IO depths : 1=1.8%, 2=4.0%, 4=11.4%, 8=70.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:57.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 complete : 0=0.0%, 4=90.8%, 8=5.3%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 issued rwts: total=2275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.274 filename2: (groupid=0, jobs=1): err= 0: pid=92066: Thu Nov 21 02:43:35 2024 00:26:57.274 read: IOPS=243, BW=974KiB/s (997kB/s)(9764KiB/10026msec) 00:26:57.274 slat (usec): min=6, max=8035, avg=22.53, stdev=256.53 00:26:57.274 clat (msec): min=24, max=123, avg=65.47, stdev=19.44 00:26:57.274 lat (msec): min=24, max=123, avg=65.49, stdev=19.44 00:26:57.274 clat percentiles (msec): 00:26:57.274 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 50], 00:26:57.274 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 62], 
60.00th=[ 70], 00:26:57.274 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 92], 95.00th=[ 104], 00:26:57.274 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 124], 99.95th=[ 124], 00:26:57.274 | 99.99th=[ 124] 00:26:57.274 bw ( KiB/s): min= 768, max= 1248, per=4.24%, avg=974.40, stdev=128.96, samples=20 00:26:57.274 iops : min= 192, max= 312, avg=243.60, stdev=32.24, samples=20 00:26:57.274 lat (msec) : 50=22.53%, 100=71.69%, 250=5.78% 00:26:57.274 cpu : usr=41.51%, sys=0.56%, ctx=1094, majf=0, minf=9 00:26:57.274 IO depths : 1=1.4%, 2=3.2%, 4=12.8%, 8=70.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:57.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 issued rwts: total=2441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.274 filename2: (groupid=0, jobs=1): err= 0: pid=92067: Thu Nov 21 02:43:35 2024 00:26:57.274 read: IOPS=246, BW=986KiB/s (1010kB/s)(9908KiB/10048msec) 00:26:57.274 slat (usec): min=6, max=8021, avg=18.27, stdev=227.63 00:26:57.274 clat (msec): min=4, max=146, avg=64.72, stdev=23.44 00:26:57.274 lat (msec): min=4, max=146, avg=64.74, stdev=23.44 00:26:57.274 clat percentiles (msec): 00:26:57.274 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:26:57.274 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 68], 00:26:57.274 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 97], 95.00th=[ 108], 00:26:57.274 | 99.00th=[ 129], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:26:57.274 | 99.99th=[ 148] 00:26:57.274 bw ( KiB/s): min= 640, max= 1444, per=4.28%, avg=983.80, stdev=190.40, samples=20 00:26:57.274 iops : min= 160, max= 361, avg=245.95, stdev=47.60, samples=20 00:26:57.274 lat (msec) : 10=2.50%, 20=0.08%, 50=25.68%, 100=62.86%, 250=8.88% 00:26:57.274 cpu : usr=36.54%, sys=0.49%, ctx=992, majf=0, minf=9 00:26:57.274 IO depths : 1=1.3%, 2=2.9%, 4=10.3%, 8=73.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:57.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.274 issued rwts: total=2477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.275 filename2: (groupid=0, jobs=1): err= 0: pid=92068: Thu Nov 21 02:43:35 2024 00:26:57.275 read: IOPS=225, BW=901KiB/s (923kB/s)(9016KiB/10005msec) 00:26:57.275 slat (usec): min=4, max=4031, avg=13.49, stdev=84.95 00:26:57.275 clat (msec): min=7, max=134, avg=70.94, stdev=21.96 00:26:57.275 lat (msec): min=7, max=134, avg=70.95, stdev=21.96 00:26:57.275 clat percentiles (msec): 00:26:57.275 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 52], 00:26:57.275 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 77], 00:26:57.275 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 104], 95.00th=[ 108], 00:26:57.275 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:26:57.275 | 99.99th=[ 136] 00:26:57.275 bw ( KiB/s): min= 640, max= 1248, per=3.89%, avg=895.16, stdev=146.62, samples=19 00:26:57.275 iops : min= 160, max= 312, avg=223.79, stdev=36.65, samples=19 00:26:57.275 lat (msec) : 10=0.71%, 50=17.13%, 100=70.67%, 250=11.49% 00:26:57.275 cpu : usr=34.60%, sys=0.52%, ctx=902, majf=0, minf=9 00:26:57.275 IO depths : 1=2.0%, 2=4.4%, 4=12.7%, 8=69.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:57.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:57.275 complete : 0=0.0%, 4=90.9%, 8=4.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.275 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:57.275 00:26:57.275 Run status group 0 (all jobs): 00:26:57.275 READ: bw=22.4MiB/s (23.5MB/s), 848KiB/s-1157KiB/s (868kB/s-1185kB/s), io=225MiB (236MB), run=10002-10049msec 00:26:57.275 02:43:36 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:57.275 02:43:36 -- target/dif.sh@43 -- # local sub 00:26:57.275 02:43:36 -- target/dif.sh@45 -- # for sub in "$@" 00:26:57.275 02:43:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:57.275 02:43:36 -- target/dif.sh@36 -- # local sub_id=0 00:26:57.275 02:43:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@45 -- # for sub in "$@" 00:26:57.275 02:43:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:57.275 02:43:36 -- target/dif.sh@36 -- # local sub_id=1 00:26:57.275 02:43:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@45 -- # for sub in "$@" 00:26:57.275 02:43:36 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:57.275 02:43:36 -- target/dif.sh@36 -- # local sub_id=2 00:26:57.275 02:43:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:57.275 02:43:36 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:57.275 02:43:36 -- target/dif.sh@115 -- # numjobs=2 00:26:57.275 02:43:36 -- target/dif.sh@115 -- # iodepth=8 00:26:57.275 02:43:36 -- target/dif.sh@115 -- # runtime=5 00:26:57.275 02:43:36 -- target/dif.sh@115 -- # files=1 00:26:57.275 02:43:36 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:57.275 02:43:36 -- target/dif.sh@28 -- # local sub 00:26:57.275 02:43:36 -- target/dif.sh@30 -- # for sub in "$@" 00:26:57.275 02:43:36 -- target/dif.sh@31 -- # 
create_subsystem 0 00:26:57.275 02:43:36 -- target/dif.sh@18 -- # local sub_id=0 00:26:57.275 02:43:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 bdev_null0 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 [2024-11-21 02:43:36.172126] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@30 -- # for sub in "$@" 00:26:57.275 02:43:36 -- target/dif.sh@31 -- # create_subsystem 1 00:26:57.275 02:43:36 -- target/dif.sh@18 -- # local sub_id=1 00:26:57.275 02:43:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 bdev_null1 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.275 02:43:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.275 02:43:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.275 02:43:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.275 02:43:36 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:57.275 02:43:36 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:57.275 02:43:36 -- target/dif.sh@82 -- # gen_fio_conf 00:26:57.275 02:43:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:57.275 02:43:36 -- target/dif.sh@54 
-- # local file 00:26:57.275 02:43:36 -- target/dif.sh@56 -- # cat 00:26:57.275 02:43:36 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:57.275 02:43:36 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:57.275 02:43:36 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:57.275 02:43:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:57.275 02:43:36 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:57.275 02:43:36 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:57.275 02:43:36 -- common/autotest_common.sh@1330 -- # shift 00:26:57.275 02:43:36 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:57.275 02:43:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:57.275 02:43:36 -- nvmf/common.sh@520 -- # config=() 00:26:57.275 02:43:36 -- nvmf/common.sh@520 -- # local subsystem config 00:26:57.275 02:43:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:57.275 02:43:36 -- target/dif.sh@72 -- # (( file <= files )) 00:26:57.275 02:43:36 -- target/dif.sh@73 -- # cat 00:26:57.275 02:43:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:57.275 02:43:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:57.275 { 00:26:57.275 "params": { 00:26:57.275 "name": "Nvme$subsystem", 00:26:57.275 "trtype": "$TEST_TRANSPORT", 00:26:57.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.275 "adrfam": "ipv4", 00:26:57.275 "trsvcid": "$NVMF_PORT", 00:26:57.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.275 "hdgst": ${hdgst:-false}, 00:26:57.275 "ddgst": ${ddgst:-false} 00:26:57.275 }, 00:26:57.275 "method": "bdev_nvme_attach_controller" 00:26:57.275 } 00:26:57.275 EOF 00:26:57.275 )") 00:26:57.275 02:43:36 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:57.275 02:43:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:57.275 02:43:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:57.275 02:43:36 -- nvmf/common.sh@542 -- # cat 00:26:57.275 02:43:36 -- target/dif.sh@72 -- # (( file++ )) 00:26:57.275 02:43:36 -- target/dif.sh@72 -- # (( file <= files )) 00:26:57.275 02:43:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:57.275 02:43:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:57.275 { 00:26:57.275 "params": { 00:26:57.275 "name": "Nvme$subsystem", 00:26:57.275 "trtype": "$TEST_TRANSPORT", 00:26:57.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.275 "adrfam": "ipv4", 00:26:57.276 "trsvcid": "$NVMF_PORT", 00:26:57.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.276 "hdgst": ${hdgst:-false}, 00:26:57.276 "ddgst": ${ddgst:-false} 00:26:57.276 }, 00:26:57.276 "method": "bdev_nvme_attach_controller" 00:26:57.276 } 00:26:57.276 EOF 00:26:57.276 )") 00:26:57.276 02:43:36 -- nvmf/common.sh@542 -- # cat 00:26:57.276 02:43:36 -- nvmf/common.sh@544 -- # jq . 
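The trace above assembles a per-subsystem JSON bdev config on the fly and hands it to the fio spdk_bdev plugin through --spdk_json_conf, together with a generated job file. A minimal hand-rolled equivalent is sketched below. The file names (nvme.json, dif.fio), the single-job layout, the bdev name Nvme0n1, and the outer "subsystems"/"bdev"/"config" wrapper are illustrative assumptions; the ioengine, the bdev_nvme_attach_controller parameters, the block sizes, queue depth, job count and runtime, and the 10.0.0.2:4420 TCP target are taken from the values visible in this log.

    # Sketch only: reproduce the traced fio_bdev run by hand.
    # The JSON wrapper keys are assumed; the "params" block mirrors the config printed in this log.
    cat > nvme.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false }
        }]
      }]
    }
    EOF
    cat > dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=nvme.json
    thread=1
    direct=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    [filename0]
    filename=Nvme0n1
    EOF
    # The plugin is loaded the same way the harness does it, via LD_PRELOAD.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev fio dif.fio

In the harness itself the JSON and the job file are not written to disk; they are passed as the /dev/fd/62 and /dev/fd/61 process substitutions seen in the command line above.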
00:26:57.276 02:43:36 -- nvmf/common.sh@545 -- # IFS=, 00:26:57.276 02:43:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:57.276 "params": { 00:26:57.276 "name": "Nvme0", 00:26:57.276 "trtype": "tcp", 00:26:57.276 "traddr": "10.0.0.2", 00:26:57.276 "adrfam": "ipv4", 00:26:57.276 "trsvcid": "4420", 00:26:57.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:57.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:57.276 "hdgst": false, 00:26:57.276 "ddgst": false 00:26:57.276 }, 00:26:57.276 "method": "bdev_nvme_attach_controller" 00:26:57.276 },{ 00:26:57.276 "params": { 00:26:57.276 "name": "Nvme1", 00:26:57.276 "trtype": "tcp", 00:26:57.276 "traddr": "10.0.0.2", 00:26:57.276 "adrfam": "ipv4", 00:26:57.276 "trsvcid": "4420", 00:26:57.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:57.276 "hdgst": false, 00:26:57.276 "ddgst": false 00:26:57.276 }, 00:26:57.276 "method": "bdev_nvme_attach_controller" 00:26:57.276 }' 00:26:57.276 02:43:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:57.276 02:43:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:57.276 02:43:36 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:57.276 02:43:36 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:57.276 02:43:36 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:57.276 02:43:36 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:57.276 02:43:36 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:57.276 02:43:36 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:57.276 02:43:36 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:57.276 02:43:36 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:57.276 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:57.276 ... 00:26:57.276 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:57.276 ... 00:26:57.276 fio-3.35 00:26:57.276 Starting 4 threads 00:26:57.276 [2024-11-21 02:43:36.920175] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:57.276 [2024-11-21 02:43:36.920241] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:01.464 00:27:01.464 filename0: (groupid=0, jobs=1): err= 0: pid=92206: Thu Nov 21 02:43:42 2024 00:27:01.464 read: IOPS=2312, BW=18.1MiB/s (18.9MB/s)(90.4MiB/5002msec) 00:27:01.464 slat (nsec): min=5821, max=84144, avg=14481.64, stdev=9944.57 00:27:01.464 clat (usec): min=1882, max=4821, avg=3398.43, stdev=169.05 00:27:01.464 lat (usec): min=1892, max=4841, avg=3412.91, stdev=167.07 00:27:01.464 clat percentiles (usec): 00:27:01.464 | 1.00th=[ 3064], 5.00th=[ 3195], 10.00th=[ 3228], 20.00th=[ 3294], 00:27:01.464 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3425], 00:27:01.464 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3589], 95.00th=[ 3687], 00:27:01.464 | 99.00th=[ 3949], 99.50th=[ 4080], 99.90th=[ 4490], 99.95th=[ 4555], 00:27:01.464 | 99.99th=[ 4817] 00:27:01.464 bw ( KiB/s): min=18048, max=18996, per=24.96%, avg=18480.44, stdev=329.85, samples=9 00:27:01.464 iops : min= 2256, max= 2374, avg=2310.00, stdev=41.13, samples=9 00:27:01.464 lat (msec) : 2=0.02%, 4=99.26%, 10=0.73% 00:27:01.464 cpu : usr=95.06%, sys=3.60%, ctx=23, majf=0, minf=10 00:27:01.464 IO depths : 1=10.6%, 2=24.6%, 4=50.4%, 8=14.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:01.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.464 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.464 issued rwts: total=11568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:01.464 filename0: (groupid=0, jobs=1): err= 0: pid=92207: Thu Nov 21 02:43:42 2024 00:27:01.464 read: IOPS=2312, BW=18.1MiB/s (18.9MB/s)(90.3MiB/5001msec) 00:27:01.464 slat (usec): min=3, max=123, avg=21.69, stdev= 9.53 00:27:01.464 clat (usec): min=963, max=5726, avg=3352.50, stdev=180.30 00:27:01.464 lat (usec): min=970, max=5743, avg=3374.19, stdev=180.13 00:27:01.464 clat percentiles (usec): 00:27:01.464 | 1.00th=[ 3064], 5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3228], 00:27:01.464 | 30.00th=[ 3261], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3359], 00:27:01.464 | 70.00th=[ 3392], 80.00th=[ 3458], 90.00th=[ 3523], 95.00th=[ 3621], 00:27:01.464 | 99.00th=[ 3884], 99.50th=[ 4015], 99.90th=[ 5211], 99.95th=[ 5735], 00:27:01.464 | 99.99th=[ 5735] 00:27:01.464 bw ( KiB/s): min=18048, max=18996, per=24.95%, avg=18475.56, stdev=335.21, samples=9 00:27:01.464 iops : min= 2256, max= 2374, avg=2309.33, stdev=41.87, samples=9 00:27:01.464 lat (usec) : 1000=0.03% 00:27:01.464 lat (msec) : 2=0.03%, 4=99.39%, 10=0.54% 00:27:01.464 cpu : usr=95.52%, sys=3.34%, ctx=5, majf=0, minf=9 00:27:01.464 IO depths : 1=11.4%, 2=24.7%, 4=50.3%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:01.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.464 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.464 issued rwts: total=11563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:01.464 filename1: (groupid=0, jobs=1): err= 0: pid=92208: Thu Nov 21 02:43:42 2024 00:27:01.464 read: IOPS=2311, BW=18.1MiB/s (18.9MB/s)(90.3MiB/5001msec) 00:27:01.464 slat (nsec): min=6031, max=95268, avg=21192.75, stdev=9701.53 00:27:01.464 clat (usec): min=1253, max=6129, avg=3361.99, stdev=170.10 00:27:01.464 lat (usec): min=1270, max=6136, avg=3383.18, stdev=170.03 00:27:01.464 clat percentiles (usec): 
00:27:01.464 | 1.00th=[ 3064], 5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3261], 00:27:01.464 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3326], 60.00th=[ 3359], 00:27:01.464 | 70.00th=[ 3392], 80.00th=[ 3458], 90.00th=[ 3556], 95.00th=[ 3621], 00:27:01.464 | 99.00th=[ 3916], 99.50th=[ 4047], 99.90th=[ 4948], 99.95th=[ 5342], 00:27:01.464 | 99.99th=[ 5735] 00:27:01.464 bw ( KiB/s): min=17920, max=18944, per=24.95%, avg=18474.67, stdev=368.43, samples=9 00:27:01.464 iops : min= 2240, max= 2368, avg=2309.33, stdev=46.05, samples=9 00:27:01.464 lat (msec) : 2=0.01%, 4=99.42%, 10=0.57% 00:27:01.464 cpu : usr=95.02%, sys=3.44%, ctx=60, majf=0, minf=9 00:27:01.464 IO depths : 1=6.7%, 2=25.0%, 4=50.0%, 8=18.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:01.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.464 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.464 issued rwts: total=11560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:01.464 filename1: (groupid=0, jobs=1): err= 0: pid=92209: Thu Nov 21 02:43:42 2024 00:27:01.464 read: IOPS=2319, BW=18.1MiB/s (19.0MB/s)(90.6MiB/5002msec) 00:27:01.464 slat (nsec): min=5837, max=76703, avg=9257.43, stdev=6062.82 00:27:01.464 clat (usec): min=837, max=4716, avg=3404.91, stdev=179.91 00:27:01.464 lat (usec): min=844, max=4748, avg=3414.16, stdev=180.53 00:27:01.464 clat percentiles (usec): 00:27:01.464 | 1.00th=[ 3130], 5.00th=[ 3228], 10.00th=[ 3261], 20.00th=[ 3326], 00:27:01.464 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3425], 00:27:01.464 | 70.00th=[ 3458], 80.00th=[ 3490], 90.00th=[ 3556], 95.00th=[ 3654], 00:27:01.464 | 99.00th=[ 3884], 99.50th=[ 4015], 99.90th=[ 4359], 99.95th=[ 4621], 00:27:01.464 | 99.99th=[ 4686] 00:27:01.464 bw ( KiB/s): min=18048, max=18996, per=25.03%, avg=18533.22, stdev=338.74, samples=9 00:27:01.464 iops : min= 2256, max= 2374, avg=2316.56, stdev=42.31, samples=9 00:27:01.464 lat (usec) : 1000=0.09% 00:27:01.464 lat (msec) : 2=0.16%, 4=99.24%, 10=0.52% 00:27:01.464 cpu : usr=94.92%, sys=3.78%, ctx=3, majf=0, minf=0 00:27:01.464 IO depths : 1=9.5%, 2=23.5%, 4=51.4%, 8=15.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:01.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.464 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.464 issued rwts: total=11600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.464 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:01.464 00:27:01.464 Run status group 0 (all jobs): 00:27:01.464 READ: bw=72.3MiB/s (75.8MB/s), 18.1MiB/s-18.1MiB/s (18.9MB/s-19.0MB/s), io=362MiB (379MB), run=5001-5002msec 00:27:01.723 02:43:42 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:01.723 02:43:42 -- target/dif.sh@43 -- # local sub 00:27:01.723 02:43:42 -- target/dif.sh@45 -- # for sub in "$@" 00:27:01.723 02:43:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:01.723 02:43:42 -- target/dif.sh@36 -- # local sub_id=0 00:27:01.723 02:43:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:01.723 02:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.723 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.723 02:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.723 02:43:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:01.723 02:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:01.723 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.723 02:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.723 02:43:42 -- target/dif.sh@45 -- # for sub in "$@" 00:27:01.723 02:43:42 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:01.723 02:43:42 -- target/dif.sh@36 -- # local sub_id=1 00:27:01.723 02:43:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.723 02:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.723 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.723 02:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.723 02:43:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:01.723 02:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.723 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.723 02:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.723 00:27:01.723 real 0m23.698s 00:27:01.723 user 2m8.483s 00:27:01.723 sys 0m3.485s 00:27:01.723 02:43:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:01.723 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.723 ************************************ 00:27:01.723 END TEST fio_dif_rand_params 00:27:01.723 ************************************ 00:27:01.723 02:43:42 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:01.723 02:43:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:01.723 02:43:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:01.723 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.723 ************************************ 00:27:01.723 START TEST fio_dif_digest 00:27:01.723 ************************************ 00:27:01.723 02:43:42 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:27:01.723 02:43:42 -- target/dif.sh@123 -- # local NULL_DIF 00:27:01.723 02:43:42 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:01.723 02:43:42 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:01.723 02:43:42 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:01.723 02:43:42 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:01.723 02:43:42 -- target/dif.sh@127 -- # numjobs=3 00:27:01.723 02:43:42 -- target/dif.sh@127 -- # iodepth=3 00:27:01.723 02:43:42 -- target/dif.sh@127 -- # runtime=10 00:27:01.723 02:43:42 -- target/dif.sh@128 -- # hdgst=true 00:27:01.723 02:43:42 -- target/dif.sh@128 -- # ddgst=true 00:27:01.723 02:43:42 -- target/dif.sh@130 -- # create_subsystems 0 00:27:01.723 02:43:42 -- target/dif.sh@28 -- # local sub 00:27:01.723 02:43:42 -- target/dif.sh@30 -- # for sub in "$@" 00:27:01.723 02:43:42 -- target/dif.sh@31 -- # create_subsystem 0 00:27:01.723 02:43:42 -- target/dif.sh@18 -- # local sub_id=0 00:27:01.723 02:43:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:01.723 02:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.723 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.723 bdev_null0 00:27:01.723 02:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.723 02:43:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:01.723 02:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.723 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.982 02:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.982 02:43:42 -- target/dif.sh@23 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:01.982 02:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.982 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.982 02:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.982 02:43:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:01.982 02:43:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.982 02:43:42 -- common/autotest_common.sh@10 -- # set +x 00:27:01.982 [2024-11-21 02:43:42.385844] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.982 02:43:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.982 02:43:42 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:01.982 02:43:42 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:01.982 02:43:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:01.982 02:43:42 -- nvmf/common.sh@520 -- # config=() 00:27:01.982 02:43:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:01.982 02:43:42 -- nvmf/common.sh@520 -- # local subsystem config 00:27:01.982 02:43:42 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:01.982 02:43:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:01.982 02:43:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:01.982 { 00:27:01.982 "params": { 00:27:01.982 "name": "Nvme$subsystem", 00:27:01.982 "trtype": "$TEST_TRANSPORT", 00:27:01.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.983 "adrfam": "ipv4", 00:27:01.983 "trsvcid": "$NVMF_PORT", 00:27:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.983 "hdgst": ${hdgst:-false}, 00:27:01.983 "ddgst": ${ddgst:-false} 00:27:01.983 }, 00:27:01.983 "method": "bdev_nvme_attach_controller" 00:27:01.983 } 00:27:01.983 EOF 00:27:01.983 )") 00:27:01.983 02:43:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:01.983 02:43:42 -- target/dif.sh@82 -- # gen_fio_conf 00:27:01.983 02:43:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:01.983 02:43:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:01.983 02:43:42 -- target/dif.sh@54 -- # local file 00:27:01.983 02:43:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:01.983 02:43:42 -- common/autotest_common.sh@1330 -- # shift 00:27:01.983 02:43:42 -- target/dif.sh@56 -- # cat 00:27:01.983 02:43:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:01.983 02:43:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:01.983 02:43:42 -- nvmf/common.sh@542 -- # cat 00:27:01.983 02:43:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:01.983 02:43:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:01.983 02:43:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:01.983 02:43:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:01.983 02:43:42 -- target/dif.sh@72 -- # (( file <= files )) 00:27:01.983 02:43:42 -- nvmf/common.sh@544 -- # jq . 
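The fio_dif_digest setup traced above creates bdev_null0 (64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3), exposes it as nqn.2016-06.io.spdk:cnode0, and listens on NVMe/TCP 10.0.0.2:4420, with header and data digests (hdgst/ddgst) enabled in the generated initiator config. Replayed by hand with scripts/rpc.py against a running nvmf_tgt it looks roughly like the sketch below; the nvmf_create_transport step is not visible in this part of the log and is assumed to have been done earlier by the harness.

    # Sketch only: replay of the rpc_cmd calls traced above, using scripts/rpc.py directly.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp                      # assumed earlier harness step
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
         --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420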
00:27:01.983 02:43:42 -- nvmf/common.sh@545 -- # IFS=, 00:27:01.983 02:43:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:01.983 "params": { 00:27:01.983 "name": "Nvme0", 00:27:01.983 "trtype": "tcp", 00:27:01.983 "traddr": "10.0.0.2", 00:27:01.983 "adrfam": "ipv4", 00:27:01.983 "trsvcid": "4420", 00:27:01.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:01.983 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:01.983 "hdgst": true, 00:27:01.983 "ddgst": true 00:27:01.983 }, 00:27:01.983 "method": "bdev_nvme_attach_controller" 00:27:01.983 }' 00:27:01.983 02:43:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:01.983 02:43:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:01.983 02:43:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:01.983 02:43:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:01.983 02:43:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:01.983 02:43:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:01.983 02:43:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:01.983 02:43:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:01.983 02:43:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:01.983 02:43:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:01.983 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:01.983 ... 00:27:01.983 fio-3.35 00:27:01.983 Starting 3 threads 00:27:02.549 [2024-11-21 02:43:42.968569] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:27:02.549 [2024-11-21 02:43:42.968644] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:12.558 00:27:12.558 filename0: (groupid=0, jobs=1): err= 0: pid=92312: Thu Nov 21 02:43:53 2024 00:27:12.558 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(352MiB/10007msec) 00:27:12.558 slat (nsec): min=6257, max=86192, avg=15017.46, stdev=5953.22 00:27:12.558 clat (usec): min=6461, max=52549, avg=10645.77, stdev=5607.81 00:27:12.558 lat (usec): min=6479, max=52567, avg=10660.79, stdev=5607.87 00:27:12.558 clat percentiles (usec): 00:27:12.558 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:27:12.558 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:27:12.558 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11338], 00:27:12.558 | 99.00th=[50070], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:27:12.558 | 99.99th=[52691] 00:27:12.558 bw ( KiB/s): min=26624, max=40448, per=35.95%, avg=35895.84, stdev=3419.56, samples=19 00:27:12.558 iops : min= 208, max= 316, avg=280.42, stdev=26.71, samples=19 00:27:12.558 lat (msec) : 10=55.56%, 20=42.52%, 50=0.64%, 100=1.28% 00:27:12.558 cpu : usr=92.80%, sys=5.21%, ctx=9, majf=0, minf=9 00:27:12.558 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.558 issued rwts: total=2815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.558 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.558 filename0: (groupid=0, jobs=1): err= 0: pid=92313: Thu Nov 21 02:43:53 2024 00:27:12.558 read: IOPS=230, BW=28.9MiB/s (30.3MB/s)(289MiB/10006msec) 00:27:12.558 slat (nsec): min=6312, max=80766, avg=17933.30, stdev=5859.38 00:27:12.558 clat (usec): min=3851, max=17082, avg=12966.36, stdev=1990.00 00:27:12.558 lat (usec): min=3861, max=17102, avg=12984.29, stdev=1990.79 00:27:12.558 clat percentiles (usec): 00:27:12.558 | 1.00th=[ 7439], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[12649], 00:27:12.558 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:27:12.558 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[15008], 00:27:12.558 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16909], 99.95th=[16909], 00:27:12.558 | 99.99th=[17171] 00:27:12.558 bw ( KiB/s): min=26624, max=34304, per=29.72%, avg=29679.42, stdev=2180.51, samples=19 00:27:12.558 iops : min= 208, max= 268, avg=231.84, stdev=17.04, samples=19 00:27:12.558 lat (msec) : 4=0.13%, 10=11.55%, 20=88.32% 00:27:12.558 cpu : usr=94.90%, sys=3.72%, ctx=12, majf=0, minf=0 00:27:12.558 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.558 issued rwts: total=2311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.558 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.558 filename0: (groupid=0, jobs=1): err= 0: pid=92314: Thu Nov 21 02:43:53 2024 00:27:12.558 read: IOPS=267, BW=33.5MiB/s (35.1MB/s)(335MiB/10007msec) 00:27:12.558 slat (usec): min=5, max=307, avg=14.48, stdev= 8.85 00:27:12.558 clat (usec): min=5576, max=52733, avg=11178.01, stdev=2495.57 00:27:12.558 lat (usec): min=5586, max=52753, avg=11192.49, stdev=2495.27 00:27:12.558 clat percentiles (usec): 00:27:12.558 | 
1.00th=[ 6456], 5.00th=[ 7046], 10.00th=[ 8094], 20.00th=[10421], 00:27:12.558 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:27:12.558 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12649], 95.00th=[13173], 00:27:12.558 | 99.00th=[14091], 99.50th=[14484], 99.90th=[52167], 99.95th=[52167], 00:27:12.558 | 99.99th=[52691] 00:27:12.558 bw ( KiB/s): min=31488, max=40448, per=34.45%, avg=34404.16, stdev=2218.14, samples=19 00:27:12.558 iops : min= 246, max= 316, avg=268.74, stdev=17.35, samples=19 00:27:12.558 lat (msec) : 10=14.29%, 20=85.49%, 50=0.07%, 100=0.15% 00:27:12.558 cpu : usr=93.00%, sys=4.88%, ctx=134, majf=0, minf=9 00:27:12.558 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.558 issued rwts: total=2681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.559 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:12.559 00:27:12.559 Run status group 0 (all jobs): 00:27:12.559 READ: bw=97.5MiB/s (102MB/s), 28.9MiB/s-35.2MiB/s (30.3MB/s-36.9MB/s), io=976MiB (1023MB), run=10006-10007msec 00:27:12.817 02:43:53 -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:12.817 02:43:53 -- target/dif.sh@43 -- # local sub 00:27:12.817 02:43:53 -- target/dif.sh@45 -- # for sub in "$@" 00:27:12.817 02:43:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:12.817 02:43:53 -- target/dif.sh@36 -- # local sub_id=0 00:27:12.817 02:43:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:12.817 02:43:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.817 02:43:53 -- common/autotest_common.sh@10 -- # set +x 00:27:12.817 02:43:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.817 02:43:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:12.817 02:43:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.817 02:43:53 -- common/autotest_common.sh@10 -- # set +x 00:27:12.817 02:43:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.817 00:27:12.817 real 0m10.965s 00:27:12.817 user 0m28.699s 00:27:12.817 sys 0m1.634s 00:27:12.817 02:43:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:12.817 ************************************ 00:27:12.817 END TEST fio_dif_digest 00:27:12.818 02:43:53 -- common/autotest_common.sh@10 -- # set +x 00:27:12.818 ************************************ 00:27:12.818 02:43:53 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:12.818 02:43:53 -- target/dif.sh@147 -- # nvmftestfini 00:27:12.818 02:43:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:12.818 02:43:53 -- nvmf/common.sh@116 -- # sync 00:27:12.818 02:43:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:12.818 02:43:53 -- nvmf/common.sh@119 -- # set +e 00:27:12.818 02:43:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:12.818 02:43:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:12.818 rmmod nvme_tcp 00:27:12.818 rmmod nvme_fabrics 00:27:12.818 rmmod nvme_keyring 00:27:12.818 02:43:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:13.076 02:43:53 -- nvmf/common.sh@123 -- # set -e 00:27:13.076 02:43:53 -- nvmf/common.sh@124 -- # return 0 00:27:13.076 02:43:53 -- nvmf/common.sh@477 -- # '[' -n 91539 ']' 00:27:13.076 02:43:53 -- nvmf/common.sh@478 -- # killprocess 91539 00:27:13.076 02:43:53 -- common/autotest_common.sh@936 -- # '[' -z 91539 ']' 
00:27:13.076 02:43:53 -- common/autotest_common.sh@940 -- # kill -0 91539 00:27:13.076 02:43:53 -- common/autotest_common.sh@941 -- # uname 00:27:13.076 02:43:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:13.076 02:43:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91539 00:27:13.076 02:43:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:13.076 02:43:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:13.076 killing process with pid 91539 00:27:13.076 02:43:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91539' 00:27:13.076 02:43:53 -- common/autotest_common.sh@955 -- # kill 91539 00:27:13.076 02:43:53 -- common/autotest_common.sh@960 -- # wait 91539 00:27:13.335 02:43:53 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:13.335 02:43:53 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:13.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:13.594 Waiting for block devices as requested 00:27:13.852 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.853 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:13.853 02:43:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:13.853 02:43:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:13.853 02:43:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:13.853 02:43:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:13.853 02:43:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.853 02:43:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:13.853 02:43:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.853 02:43:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:13.853 00:27:13.853 real 1m0.569s 00:27:13.853 user 3m52.820s 00:27:13.853 sys 0m13.725s 00:27:13.853 02:43:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:13.853 02:43:54 -- common/autotest_common.sh@10 -- # set +x 00:27:13.853 ************************************ 00:27:13.853 END TEST nvmf_dif 00:27:13.853 ************************************ 00:27:14.112 02:43:54 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:14.112 02:43:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:14.112 02:43:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:14.112 02:43:54 -- common/autotest_common.sh@10 -- # set +x 00:27:14.112 ************************************ 00:27:14.112 START TEST nvmf_abort_qd_sizes 00:27:14.112 ************************************ 00:27:14.112 02:43:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:14.112 * Looking for test storage... 
00:27:14.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:14.112 02:43:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:14.112 02:43:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:14.112 02:43:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:14.112 02:43:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:14.112 02:43:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:14.112 02:43:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:14.112 02:43:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:14.112 02:43:54 -- scripts/common.sh@335 -- # IFS=.-: 00:27:14.112 02:43:54 -- scripts/common.sh@335 -- # read -ra ver1 00:27:14.112 02:43:54 -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.112 02:43:54 -- scripts/common.sh@336 -- # read -ra ver2 00:27:14.112 02:43:54 -- scripts/common.sh@337 -- # local 'op=<' 00:27:14.112 02:43:54 -- scripts/common.sh@339 -- # ver1_l=2 00:27:14.112 02:43:54 -- scripts/common.sh@340 -- # ver2_l=1 00:27:14.112 02:43:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:14.112 02:43:54 -- scripts/common.sh@343 -- # case "$op" in 00:27:14.112 02:43:54 -- scripts/common.sh@344 -- # : 1 00:27:14.112 02:43:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:14.112 02:43:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:14.112 02:43:54 -- scripts/common.sh@364 -- # decimal 1 00:27:14.112 02:43:54 -- scripts/common.sh@352 -- # local d=1 00:27:14.112 02:43:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.112 02:43:54 -- scripts/common.sh@354 -- # echo 1 00:27:14.112 02:43:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:14.112 02:43:54 -- scripts/common.sh@365 -- # decimal 2 00:27:14.112 02:43:54 -- scripts/common.sh@352 -- # local d=2 00:27:14.112 02:43:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.112 02:43:54 -- scripts/common.sh@354 -- # echo 2 00:27:14.112 02:43:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:14.112 02:43:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:14.112 02:43:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:14.112 02:43:54 -- scripts/common.sh@367 -- # return 0 00:27:14.112 02:43:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.112 02:43:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:14.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.112 --rc genhtml_branch_coverage=1 00:27:14.112 --rc genhtml_function_coverage=1 00:27:14.112 --rc genhtml_legend=1 00:27:14.112 --rc geninfo_all_blocks=1 00:27:14.112 --rc geninfo_unexecuted_blocks=1 00:27:14.112 00:27:14.112 ' 00:27:14.112 02:43:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:14.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.112 --rc genhtml_branch_coverage=1 00:27:14.112 --rc genhtml_function_coverage=1 00:27:14.112 --rc genhtml_legend=1 00:27:14.112 --rc geninfo_all_blocks=1 00:27:14.112 --rc geninfo_unexecuted_blocks=1 00:27:14.112 00:27:14.112 ' 00:27:14.112 02:43:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:14.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.112 --rc genhtml_branch_coverage=1 00:27:14.112 --rc genhtml_function_coverage=1 00:27:14.112 --rc genhtml_legend=1 00:27:14.112 --rc geninfo_all_blocks=1 00:27:14.112 --rc geninfo_unexecuted_blocks=1 00:27:14.112 00:27:14.112 ' 00:27:14.112 
02:43:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:14.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.112 --rc genhtml_branch_coverage=1 00:27:14.112 --rc genhtml_function_coverage=1 00:27:14.112 --rc genhtml_legend=1 00:27:14.112 --rc geninfo_all_blocks=1 00:27:14.112 --rc geninfo_unexecuted_blocks=1 00:27:14.112 00:27:14.112 ' 00:27:14.112 02:43:54 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:14.112 02:43:54 -- nvmf/common.sh@7 -- # uname -s 00:27:14.112 02:43:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.112 02:43:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.112 02:43:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.112 02:43:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.112 02:43:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.112 02:43:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.112 02:43:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.112 02:43:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.112 02:43:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.112 02:43:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.112 02:43:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b 00:27:14.112 02:43:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ee5cedd-dd41-4909-beeb-515afd20d67b 00:27:14.112 02:43:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.112 02:43:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.112 02:43:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:14.112 02:43:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:14.112 02:43:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.112 02:43:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.112 02:43:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.112 02:43:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.112 02:43:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.112 02:43:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.112 02:43:54 -- paths/export.sh@5 -- # export PATH 00:27:14.112 02:43:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.112 02:43:54 -- nvmf/common.sh@46 -- # : 0 00:27:14.112 02:43:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:14.112 02:43:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:14.112 02:43:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:14.112 02:43:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.112 02:43:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.112 02:43:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:14.112 02:43:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:14.112 02:43:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:14.112 02:43:54 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:27:14.112 02:43:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:14.112 02:43:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.112 02:43:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:14.112 02:43:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:14.112 02:43:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:14.112 02:43:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.112 02:43:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:14.112 02:43:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.112 02:43:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:27:14.112 02:43:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:27:14.112 02:43:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:27:14.112 02:43:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:27:14.112 02:43:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:27:14.112 02:43:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:27:14.112 02:43:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.112 02:43:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.112 02:43:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:14.112 02:43:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:27:14.112 02:43:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:14.112 02:43:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:14.112 02:43:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:14.112 02:43:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.112 02:43:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:14.112 02:43:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:14.112 02:43:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:14.112 02:43:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:14.112 02:43:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:27:14.112 02:43:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:27:14.113 Cannot find device "nvmf_tgt_br" 00:27:14.113 02:43:54 -- nvmf/common.sh@154 -- # true 00:27:14.113 02:43:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:27:14.371 Cannot find device "nvmf_tgt_br2" 00:27:14.371 02:43:54 -- nvmf/common.sh@155 -- # true 
00:27:14.371 02:43:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:27:14.371 02:43:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:27:14.371 Cannot find device "nvmf_tgt_br" 00:27:14.371 02:43:54 -- nvmf/common.sh@157 -- # true 00:27:14.371 02:43:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:27:14.371 Cannot find device "nvmf_tgt_br2" 00:27:14.371 02:43:54 -- nvmf/common.sh@158 -- # true 00:27:14.371 02:43:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:27:14.371 02:43:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:27:14.371 02:43:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:14.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:14.371 02:43:54 -- nvmf/common.sh@161 -- # true 00:27:14.371 02:43:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:14.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:14.371 02:43:54 -- nvmf/common.sh@162 -- # true 00:27:14.371 02:43:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:27:14.371 02:43:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:14.371 02:43:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:14.371 02:43:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:14.371 02:43:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:14.371 02:43:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:14.371 02:43:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:14.371 02:43:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:14.371 02:43:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:14.371 02:43:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:27:14.371 02:43:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:27:14.371 02:43:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:27:14.371 02:43:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:27:14.371 02:43:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:14.371 02:43:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:14.371 02:43:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:14.371 02:43:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:27:14.371 02:43:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:27:14.371 02:43:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:27:14.371 02:43:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:14.371 02:43:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:14.630 02:43:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:14.630 02:43:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:14.630 02:43:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:27:14.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:14.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:27:14.630 00:27:14.630 --- 10.0.0.2 ping statistics --- 00:27:14.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.630 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:14.630 02:43:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:27:14.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:14.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:27:14.630 00:27:14.630 --- 10.0.0.3 ping statistics --- 00:27:14.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.630 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:14.630 02:43:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:14.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:14.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:27:14.630 00:27:14.630 --- 10.0.0.1 ping statistics --- 00:27:14.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.630 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:27:14.630 02:43:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.630 02:43:55 -- nvmf/common.sh@421 -- # return 0 00:27:14.630 02:43:55 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:27:14.630 02:43:55 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:15.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:15.198 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:15.457 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:27:15.457 02:43:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.457 02:43:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:15.457 02:43:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:15.457 02:43:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.457 02:43:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:15.457 02:43:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:15.457 02:43:55 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:27:15.457 02:43:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:15.457 02:43:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:15.457 02:43:55 -- common/autotest_common.sh@10 -- # set +x 00:27:15.457 02:43:55 -- nvmf/common.sh@469 -- # nvmfpid=92916 00:27:15.457 02:43:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:15.457 02:43:55 -- nvmf/common.sh@470 -- # waitforlisten 92916 00:27:15.457 02:43:55 -- common/autotest_common.sh@829 -- # '[' -z 92916 ']' 00:27:15.457 02:43:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.457 02:43:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.457 02:43:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.457 02:43:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.457 02:43:55 -- common/autotest_common.sh@10 -- # set +x 00:27:15.457 [2024-11-21 02:43:56.011887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:15.457 [2024-11-21 02:43:56.011975] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.716 [2024-11-21 02:43:56.155084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:15.716 [2024-11-21 02:43:56.272143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:15.716 [2024-11-21 02:43:56.273089] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.716 [2024-11-21 02:43:56.273370] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.716 [2024-11-21 02:43:56.273689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.716 [2024-11-21 02:43:56.274140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.716 [2024-11-21 02:43:56.274285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.716 [2024-11-21 02:43:56.274946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.716 [2024-11-21 02:43:56.274963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.651 02:43:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.651 02:43:57 -- common/autotest_common.sh@862 -- # return 0 00:27:16.651 02:43:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:16.651 02:43:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:16.651 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:27:16.651 02:43:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.651 02:43:57 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:16.651 02:43:57 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:27:16.651 02:43:57 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:27:16.651 02:43:57 -- scripts/common.sh@311 -- # local bdf bdfs 00:27:16.651 02:43:57 -- scripts/common.sh@312 -- # local nvmes 00:27:16.651 02:43:57 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:27:16.651 02:43:57 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:16.651 02:43:57 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:27:16.652 02:43:57 -- scripts/common.sh@297 -- # local bdf= 00:27:16.652 02:43:57 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:27:16.652 02:43:57 -- scripts/common.sh@232 -- # local class 00:27:16.652 02:43:57 -- scripts/common.sh@233 -- # local subclass 00:27:16.652 02:43:57 -- scripts/common.sh@234 -- # local progif 00:27:16.652 02:43:57 -- scripts/common.sh@235 -- # printf %02x 1 00:27:16.652 02:43:57 -- scripts/common.sh@235 -- # class=01 00:27:16.652 02:43:57 -- scripts/common.sh@236 -- # printf %02x 8 00:27:16.652 02:43:57 -- scripts/common.sh@236 -- # subclass=08 00:27:16.652 02:43:57 -- scripts/common.sh@237 -- # printf %02x 2 00:27:16.652 02:43:57 -- scripts/common.sh@237 -- # progif=02 00:27:16.652 02:43:57 -- scripts/common.sh@239 -- # hash lspci 00:27:16.652 02:43:57 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:27:16.652 02:43:57 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:27:16.652 02:43:57 -- scripts/common.sh@242 -- # grep -i -- -p02 00:27:16.652 02:43:57 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:16.652 02:43:57 -- scripts/common.sh@244 -- # tr -d '"' 00:27:16.652 02:43:57 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:16.652 02:43:57 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:27:16.652 02:43:57 -- scripts/common.sh@15 -- # local i 00:27:16.652 02:43:57 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:27:16.652 02:43:57 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:16.652 02:43:57 -- scripts/common.sh@24 -- # return 0 00:27:16.652 02:43:57 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:27:16.652 02:43:57 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:16.652 02:43:57 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:27:16.652 02:43:57 -- scripts/common.sh@15 -- # local i 00:27:16.652 02:43:57 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:27:16.652 02:43:57 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:16.652 02:43:57 -- scripts/common.sh@24 -- # return 0 00:27:16.652 02:43:57 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:27:16.652 02:43:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:16.652 02:43:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:27:16.652 02:43:57 -- scripts/common.sh@322 -- # uname -s 00:27:16.652 02:43:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:16.652 02:43:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:16.652 02:43:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:27:16.652 02:43:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:27:16.652 02:43:57 -- scripts/common.sh@322 -- # uname -s 00:27:16.652 02:43:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:27:16.652 02:43:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:27:16.652 02:43:57 -- scripts/common.sh@327 -- # (( 2 )) 00:27:16.652 02:43:57 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:27:16.652 02:43:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:16.652 02:43:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:16.652 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:27:16.652 ************************************ 00:27:16.652 START TEST spdk_target_abort 00:27:16.652 ************************************ 00:27:16.652 02:43:57 -- common/autotest_common.sh@1114 -- # spdk_target 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:27:16.652 02:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.652 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:27:16.652 spdk_targetn1 00:27:16.652 02:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.652 02:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.652 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:27:16.652 [2024-11-21 
02:43:57.211262] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.652 02:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:27:16.652 02:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.652 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:27:16.652 02:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:27:16.652 02:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.652 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:27:16.652 02:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:27:16.652 02:43:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.652 02:43:57 -- common/autotest_common.sh@10 -- # set +x 00:27:16.652 [2024-11-21 02:43:57.243532] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.652 02:43:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:16.652 02:43:57 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:19.940 Initializing NVMe Controllers 00:27:19.940 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:19.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:19.940 Initialization complete. Launching workers. 00:27:19.940 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11218, failed: 0 00:27:19.940 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1166, failed to submit 10052 00:27:19.940 success 752, unsuccess 414, failed 0 00:27:19.940 02:44:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:19.940 02:44:00 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:23.227 Initializing NVMe Controllers 00:27:23.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:23.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:23.227 Initialization complete. Launching workers. 00:27:23.227 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5950, failed: 0 00:27:23.227 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1239, failed to submit 4711 00:27:23.227 success 259, unsuccess 980, failed 0 00:27:23.227 02:44:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:23.228 02:44:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:26.518 Initializing NVMe Controllers 00:27:26.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:26.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:26.518 Initialization complete. Launching workers. 
00:27:26.518 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31847, failed: 0 00:27:26.518 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2602, failed to submit 29245 00:27:26.518 success 515, unsuccess 2087, failed 0 00:27:26.518 02:44:07 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:27:26.518 02:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.518 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:27:26.518 02:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.518 02:44:07 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:26.518 02:44:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.518 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:27:27.086 02:44:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.086 02:44:07 -- target/abort_qd_sizes.sh@62 -- # killprocess 92916 00:27:27.086 02:44:07 -- common/autotest_common.sh@936 -- # '[' -z 92916 ']' 00:27:27.086 02:44:07 -- common/autotest_common.sh@940 -- # kill -0 92916 00:27:27.086 02:44:07 -- common/autotest_common.sh@941 -- # uname 00:27:27.086 02:44:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:27.086 02:44:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92916 00:27:27.086 02:44:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:27.086 02:44:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:27.086 killing process with pid 92916 00:27:27.086 02:44:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92916' 00:27:27.086 02:44:07 -- common/autotest_common.sh@955 -- # kill 92916 00:27:27.086 02:44:07 -- common/autotest_common.sh@960 -- # wait 92916 00:27:27.345 00:27:27.345 real 0m10.635s 00:27:27.345 user 0m43.405s 00:27:27.345 sys 0m1.758s 00:27:27.345 02:44:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:27.346 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:27:27.346 ************************************ 00:27:27.346 END TEST spdk_target_abort 00:27:27.346 ************************************ 00:27:27.346 02:44:07 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:27:27.346 02:44:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:27.346 02:44:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:27.346 02:44:07 -- common/autotest_common.sh@10 -- # set +x 00:27:27.346 ************************************ 00:27:27.346 START TEST kernel_target_abort 00:27:27.346 ************************************ 00:27:27.346 02:44:07 -- common/autotest_common.sh@1114 -- # kernel_target 00:27:27.346 02:44:07 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:27:27.346 02:44:07 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:27:27.346 02:44:07 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:27:27.346 02:44:07 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:27:27.346 02:44:07 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:27:27.346 02:44:07 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:27.346 02:44:07 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:27.346 02:44:07 -- nvmf/common.sh@627 -- # local block nvme 00:27:27.346 02:44:07 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:27:27.346 02:44:07 -- nvmf/common.sh@630 -- # modprobe nvmet 00:27:27.346 02:44:07 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:27.346 02:44:07 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:27.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:27.604 Waiting for block devices as requested 00:27:27.863 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:27.863 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:27.863 02:44:08 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:27.863 02:44:08 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:27.863 02:44:08 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:27:27.863 02:44:08 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:27:27.863 02:44:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:27.863 No valid GPT data, bailing 00:27:27.863 02:44:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:27.863 02:44:08 -- scripts/common.sh@393 -- # pt= 00:27:27.863 02:44:08 -- scripts/common.sh@394 -- # return 1 00:27:27.863 02:44:08 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:27:27.863 02:44:08 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:27.863 02:44:08 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:27.863 02:44:08 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:27:27.863 02:44:08 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:27:27.863 02:44:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:28.124 No valid GPT data, bailing 00:27:28.124 02:44:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:28.124 02:44:08 -- scripts/common.sh@393 -- # pt= 00:27:28.124 02:44:08 -- scripts/common.sh@394 -- # return 1 00:27:28.124 02:44:08 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:27:28.124 02:44:08 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:28.124 02:44:08 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:28.124 02:44:08 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:27:28.124 02:44:08 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:27:28.124 02:44:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:27:28.124 No valid GPT data, bailing 00:27:28.124 02:44:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:28.124 02:44:08 -- scripts/common.sh@393 -- # pt= 00:27:28.124 02:44:08 -- scripts/common.sh@394 -- # return 1 00:27:28.124 02:44:08 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:27:28.124 02:44:08 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:28.124 02:44:08 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:27:28.124 02:44:08 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:27:28.124 02:44:08 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:27:28.124 02:44:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:27:28.124 No valid GPT data, bailing 00:27:28.124 02:44:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:28.125 02:44:08 -- scripts/common.sh@393 -- # pt= 00:27:28.125 02:44:08 -- scripts/common.sh@394 -- # return 1 00:27:28.125 02:44:08 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:27:28.125 02:44:08 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:27:28.125 02:44:08 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:28.125 02:44:08 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:28.125 02:44:08 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:28.125 02:44:08 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:27:28.125 02:44:08 -- nvmf/common.sh@654 -- # echo 1 00:27:28.125 02:44:08 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:27:28.125 02:44:08 -- nvmf/common.sh@656 -- # echo 1 00:27:28.125 02:44:08 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:27:28.125 02:44:08 -- nvmf/common.sh@663 -- # echo tcp 00:27:28.125 02:44:08 -- nvmf/common.sh@664 -- # echo 4420 00:27:28.125 02:44:08 -- nvmf/common.sh@665 -- # echo ipv4 00:27:28.125 02:44:08 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:28.125 02:44:08 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2ee5cedd-dd41-4909-beeb-515afd20d67b --hostid=2ee5cedd-dd41-4909-beeb-515afd20d67b -a 10.0.0.1 -t tcp -s 4420 00:27:28.125 00:27:28.125 Discovery Log Number of Records 2, Generation counter 2 00:27:28.125 =====Discovery Log Entry 0====== 00:27:28.125 trtype: tcp 00:27:28.125 adrfam: ipv4 00:27:28.125 subtype: current discovery subsystem 00:27:28.125 treq: not specified, sq flow control disable supported 00:27:28.125 portid: 1 00:27:28.125 trsvcid: 4420 00:27:28.125 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:28.125 traddr: 10.0.0.1 00:27:28.125 eflags: none 00:27:28.125 sectype: none 00:27:28.125 =====Discovery Log Entry 1====== 00:27:28.125 trtype: tcp 00:27:28.125 adrfam: ipv4 00:27:28.125 subtype: nvme subsystem 00:27:28.125 treq: not specified, sq flow control disable supported 00:27:28.125 portid: 1 00:27:28.125 trsvcid: 4420 00:27:28.125 subnqn: kernel_target 00:27:28.125 traddr: 10.0.0.1 00:27:28.125 eflags: none 00:27:28.125 sectype: none 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:28.125 02:44:08 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:31.417 Initializing NVMe Controllers 00:27:31.417 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:31.417 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:31.417 Initialization complete. Launching workers. 00:27:31.417 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 33395, failed: 0 00:27:31.417 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 33395, failed to submit 0 00:27:31.417 success 0, unsuccess 33395, failed 0 00:27:31.417 02:44:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:31.417 02:44:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:34.705 Initializing NVMe Controllers 00:27:34.705 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:34.705 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:34.705 Initialization complete. Launching workers. 00:27:34.705 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 80388, failed: 0 00:27:34.705 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 34178, failed to submit 46210 00:27:34.705 success 0, unsuccess 34178, failed 0 00:27:34.705 02:44:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:34.705 02:44:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:37.994 Initializing NVMe Controllers 00:27:37.994 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:37.994 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:37.994 Initialization complete. Launching workers. 
00:27:37.994 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 97649, failed: 0 00:27:37.994 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 24414, failed to submit 73235 00:27:37.994 success 0, unsuccess 24414, failed 0 00:27:37.994 02:44:18 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:37.994 02:44:18 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:37.994 02:44:18 -- nvmf/common.sh@677 -- # echo 0 00:27:37.994 02:44:18 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:37.994 02:44:18 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:37.994 02:44:18 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:37.994 02:44:18 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:37.994 02:44:18 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:37.994 02:44:18 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:37.994 00:27:37.994 real 0m10.483s 00:27:37.994 user 0m5.357s 00:27:37.994 sys 0m2.284s 00:27:37.994 02:44:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:37.994 02:44:18 -- common/autotest_common.sh@10 -- # set +x 00:27:37.994 ************************************ 00:27:37.994 END TEST kernel_target_abort 00:27:37.994 ************************************ 00:27:37.994 02:44:18 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:37.994 02:44:18 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:37.994 02:44:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:37.994 02:44:18 -- nvmf/common.sh@116 -- # sync 00:27:37.994 02:44:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:37.994 02:44:18 -- nvmf/common.sh@119 -- # set +e 00:27:37.994 02:44:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:37.994 02:44:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:37.994 rmmod nvme_tcp 00:27:37.994 rmmod nvme_fabrics 00:27:37.994 rmmod nvme_keyring 00:27:37.994 02:44:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:37.994 02:44:18 -- nvmf/common.sh@123 -- # set -e 00:27:37.994 02:44:18 -- nvmf/common.sh@124 -- # return 0 00:27:37.994 02:44:18 -- nvmf/common.sh@477 -- # '[' -n 92916 ']' 00:27:37.994 02:44:18 -- nvmf/common.sh@478 -- # killprocess 92916 00:27:37.994 02:44:18 -- common/autotest_common.sh@936 -- # '[' -z 92916 ']' 00:27:37.994 02:44:18 -- common/autotest_common.sh@940 -- # kill -0 92916 00:27:37.994 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (92916) - No such process 00:27:37.994 Process with pid 92916 is not found 00:27:37.994 02:44:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 92916 is not found' 00:27:37.994 02:44:18 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:37.994 02:44:18 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:38.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:38.563 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:38.821 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:38.821 02:44:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:38.821 02:44:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:38.821 02:44:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:38.821 02:44:19 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:38.821 02:44:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.821 02:44:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:38.821 02:44:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.821 02:44:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:38.821 00:27:38.821 real 0m24.780s 00:27:38.821 user 0m50.274s 00:27:38.821 sys 0m5.433s 00:27:38.821 02:44:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:38.821 ************************************ 00:27:38.821 END TEST nvmf_abort_qd_sizes 00:27:38.821 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:27:38.821 ************************************ 00:27:38.821 02:44:19 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:38.821 02:44:19 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:38.821 02:44:19 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:38.821 02:44:19 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:38.821 02:44:19 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:38.821 02:44:19 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:38.821 02:44:19 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:38.821 02:44:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:38.821 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:27:38.821 02:44:19 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:38.821 02:44:19 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:38.821 02:44:19 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:38.821 02:44:19 -- common/autotest_common.sh@10 -- # set +x 00:27:40.726 INFO: APP EXITING 00:27:40.726 INFO: killing all VMs 00:27:40.726 INFO: killing vhost app 00:27:40.726 INFO: EXIT DONE 00:27:41.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:41.294 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:41.553 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:42.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:42.120 Cleaning 00:27:42.120 Removing: /var/run/dpdk/spdk0/config 00:27:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:42.120 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:42.120 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:42.120 Removing: /var/run/dpdk/spdk1/config 00:27:42.120 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:42.120 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:42.120 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:42.120 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:42.120 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:42.379 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:42.379 Removing: /var/run/dpdk/spdk2/config 00:27:42.379 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:42.379 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:42.379 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:42.379 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:42.379 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:42.379 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:42.379 Removing: /var/run/dpdk/spdk3/config 00:27:42.379 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:42.379 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:42.379 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:42.379 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:42.379 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:42.379 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:42.379 Removing: /var/run/dpdk/spdk4/config 00:27:42.379 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:42.379 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:42.379 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:42.379 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:42.379 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:42.379 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:42.379 Removing: /dev/shm/nvmf_trace.0 00:27:42.379 Removing: /dev/shm/spdk_tgt_trace.pid55507 00:27:42.379 Removing: /var/run/dpdk/spdk0 00:27:42.379 Removing: /var/run/dpdk/spdk1 00:27:42.379 Removing: /var/run/dpdk/spdk2 00:27:42.379 Removing: /var/run/dpdk/spdk3 00:27:42.379 Removing: /var/run/dpdk/spdk4 00:27:42.379 Removing: /var/run/dpdk/spdk_pid55349 00:27:42.380 Removing: /var/run/dpdk/spdk_pid55507 00:27:42.380 Removing: /var/run/dpdk/spdk_pid55834 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56103 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56286 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56376 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56475 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56577 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56621 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56651 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56714 00:27:42.380 Removing: /var/run/dpdk/spdk_pid56837 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57477 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57538 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57607 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57635 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57720 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57748 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57832 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57860 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57912 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57942 00:27:42.380 Removing: /var/run/dpdk/spdk_pid57988 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58018 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58187 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58218 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58304 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58369 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58399 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58463 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58477 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58517 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58531 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58571 
00:27:42.380 Removing: /var/run/dpdk/spdk_pid58591 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58625 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58645 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58679 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58699 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58733 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58753 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58793 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58807 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58847 00:27:42.380 Removing: /var/run/dpdk/spdk_pid58861 00:27:42.639 Removing: /var/run/dpdk/spdk_pid58901 00:27:42.639 Removing: /var/run/dpdk/spdk_pid58915 00:27:42.639 Removing: /var/run/dpdk/spdk_pid58955 00:27:42.639 Removing: /var/run/dpdk/spdk_pid58969 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59009 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59023 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59063 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59077 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59119 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59139 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59174 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59194 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59228 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59248 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59282 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59304 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59338 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59363 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59400 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59423 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59460 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59480 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59520 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59534 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59575 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59652 00:27:42.639 Removing: /var/run/dpdk/spdk_pid59770 00:27:42.639 Removing: /var/run/dpdk/spdk_pid60204 00:27:42.639 Removing: /var/run/dpdk/spdk_pid67178 00:27:42.639 Removing: /var/run/dpdk/spdk_pid67544 00:27:42.639 Removing: /var/run/dpdk/spdk_pid69966 00:27:42.639 Removing: /var/run/dpdk/spdk_pid70355 00:27:42.639 Removing: /var/run/dpdk/spdk_pid70625 00:27:42.639 Removing: /var/run/dpdk/spdk_pid70678 00:27:42.639 Removing: /var/run/dpdk/spdk_pid70946 00:27:42.639 Removing: /var/run/dpdk/spdk_pid70955 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71009 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71067 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71127 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71171 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71173 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71204 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71241 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71243 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71301 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71364 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71420 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71458 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71471 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71491 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71792 00:27:42.639 Removing: /var/run/dpdk/spdk_pid71950 00:27:42.639 Removing: /var/run/dpdk/spdk_pid72208 00:27:42.639 Removing: /var/run/dpdk/spdk_pid72258 00:27:42.639 Removing: /var/run/dpdk/spdk_pid72644 00:27:42.639 Removing: /var/run/dpdk/spdk_pid73178 00:27:42.639 Removing: /var/run/dpdk/spdk_pid73612 00:27:42.639 Removing: /var/run/dpdk/spdk_pid74589 00:27:42.639 Removing: 
/var/run/dpdk/spdk_pid75586 00:27:42.639 Removing: /var/run/dpdk/spdk_pid75704 00:27:42.639 Removing: /var/run/dpdk/spdk_pid75776 00:27:42.639 Removing: /var/run/dpdk/spdk_pid77271 00:27:42.639 Removing: /var/run/dpdk/spdk_pid77511 00:27:42.639 Removing: /var/run/dpdk/spdk_pid77963 00:27:42.639 Removing: /var/run/dpdk/spdk_pid78073 00:27:42.639 Removing: /var/run/dpdk/spdk_pid78219 00:27:42.639 Removing: /var/run/dpdk/spdk_pid78265 00:27:42.639 Removing: /var/run/dpdk/spdk_pid78316 00:27:42.639 Removing: /var/run/dpdk/spdk_pid78356 00:27:42.639 Removing: /var/run/dpdk/spdk_pid78520 00:27:42.639 Removing: /var/run/dpdk/spdk_pid78673 00:27:42.639 Removing: /var/run/dpdk/spdk_pid78937 00:27:42.639 Removing: /var/run/dpdk/spdk_pid79059 00:27:42.639 Removing: /var/run/dpdk/spdk_pid79481 00:27:42.639 Removing: /var/run/dpdk/spdk_pid79867 00:27:42.639 Removing: /var/run/dpdk/spdk_pid79875 00:27:42.898 Removing: /var/run/dpdk/spdk_pid82140 00:27:42.898 Removing: /var/run/dpdk/spdk_pid82450 00:27:42.898 Removing: /var/run/dpdk/spdk_pid82975 00:27:42.898 Removing: /var/run/dpdk/spdk_pid82978 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83319 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83339 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83353 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83388 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83394 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83534 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83546 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83650 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83652 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83760 00:27:42.898 Removing: /var/run/dpdk/spdk_pid83762 00:27:42.898 Removing: /var/run/dpdk/spdk_pid84246 00:27:42.898 Removing: /var/run/dpdk/spdk_pid84298 00:27:42.898 Removing: /var/run/dpdk/spdk_pid84445 00:27:42.898 Removing: /var/run/dpdk/spdk_pid84572 00:27:42.898 Removing: /var/run/dpdk/spdk_pid84968 00:27:42.898 Removing: /var/run/dpdk/spdk_pid85220 00:27:42.898 Removing: /var/run/dpdk/spdk_pid85722 00:27:42.898 Removing: /var/run/dpdk/spdk_pid86286 00:27:42.898 Removing: /var/run/dpdk/spdk_pid86759 00:27:42.898 Removing: /var/run/dpdk/spdk_pid86855 00:27:42.898 Removing: /var/run/dpdk/spdk_pid86940 00:27:42.898 Removing: /var/run/dpdk/spdk_pid87031 00:27:42.898 Removing: /var/run/dpdk/spdk_pid87194 00:27:42.898 Removing: /var/run/dpdk/spdk_pid87279 00:27:42.898 Removing: /var/run/dpdk/spdk_pid87369 00:27:42.898 Removing: /var/run/dpdk/spdk_pid87454 00:27:42.898 Removing: /var/run/dpdk/spdk_pid87809 00:27:42.898 Removing: /var/run/dpdk/spdk_pid88506 00:27:42.898 Removing: /var/run/dpdk/spdk_pid89874 00:27:42.899 Removing: /var/run/dpdk/spdk_pid90081 00:27:42.899 Removing: /var/run/dpdk/spdk_pid90366 00:27:42.899 Removing: /var/run/dpdk/spdk_pid90677 00:27:42.899 Removing: /var/run/dpdk/spdk_pid91240 00:27:42.899 Removing: /var/run/dpdk/spdk_pid91245 00:27:42.899 Removing: /var/run/dpdk/spdk_pid91614 00:27:42.899 Removing: /var/run/dpdk/spdk_pid91778 00:27:42.899 Removing: /var/run/dpdk/spdk_pid91936 00:27:42.899 Removing: /var/run/dpdk/spdk_pid92035 00:27:42.899 Removing: /var/run/dpdk/spdk_pid92192 00:27:42.899 Removing: /var/run/dpdk/spdk_pid92302 00:27:42.899 Removing: /var/run/dpdk/spdk_pid92985 00:27:42.899 Removing: /var/run/dpdk/spdk_pid93015 00:27:42.899 Removing: /var/run/dpdk/spdk_pid93050 00:27:42.899 Removing: /var/run/dpdk/spdk_pid93299 00:27:42.899 Removing: /var/run/dpdk/spdk_pid93329 00:27:42.899 Removing: /var/run/dpdk/spdk_pid93364 00:27:42.899 Clean 00:27:43.158 killing process with pid 
49751 00:27:43.158 killing process with pid 49753 00:27:43.158 02:44:23 -- common/autotest_common.sh@1446 -- # return 0 00:27:43.158 02:44:23 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:43.158 02:44:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:43.158 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:27:43.158 02:44:23 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:43.158 02:44:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:43.158 02:44:23 -- common/autotest_common.sh@10 -- # set +x 00:27:43.158 02:44:23 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:43.158 02:44:23 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:43.158 02:44:23 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:43.158 02:44:23 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:43.158 02:44:23 -- spdk/autotest.sh@383 -- # hostname 00:27:43.158 02:44:23 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:43.416 geninfo: WARNING: invalid characters removed from testname! 00:28:05.464 02:44:45 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:07.997 02:44:48 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:09.898 02:44:50 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:12.432 02:44:52 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:14.335 02:44:54 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:16.870 02:44:56 -- spdk/autotest.sh@392 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:18.789 02:44:59 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:18.789 02:44:59 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:18.789 02:44:59 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:18.789 02:44:59 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:18.789 02:44:59 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:18.789 02:44:59 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:18.789 02:44:59 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:18.789 02:44:59 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:18.789 02:44:59 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:18.789 02:44:59 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:18.789 02:44:59 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:18.789 02:44:59 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:18.789 02:44:59 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:18.789 02:44:59 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:18.789 02:44:59 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:18.789 02:44:59 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:28:18.789 02:44:59 -- scripts/common.sh@343 -- $ case "$op" in 00:28:18.789 02:44:59 -- scripts/common.sh@344 -- $ : 1 00:28:18.789 02:44:59 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:18.789 02:44:59 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:18.789 02:44:59 -- scripts/common.sh@364 -- $ decimal 1 00:28:18.789 02:44:59 -- scripts/common.sh@352 -- $ local d=1 00:28:18.789 02:44:59 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:18.789 02:44:59 -- scripts/common.sh@354 -- $ echo 1 00:28:18.789 02:44:59 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:18.789 02:44:59 -- scripts/common.sh@365 -- $ decimal 2 00:28:18.789 02:44:59 -- scripts/common.sh@352 -- $ local d=2 00:28:18.789 02:44:59 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:18.789 02:44:59 -- scripts/common.sh@354 -- $ echo 2 00:28:18.789 02:44:59 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:18.789 02:44:59 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:18.789 02:44:59 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:18.790 02:44:59 -- scripts/common.sh@367 -- $ return 0 00:28:18.790 02:44:59 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.790 02:44:59 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:18.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.790 --rc genhtml_branch_coverage=1 00:28:18.790 --rc genhtml_function_coverage=1 00:28:18.790 --rc genhtml_legend=1 00:28:18.790 --rc geninfo_all_blocks=1 00:28:18.790 --rc geninfo_unexecuted_blocks=1 00:28:18.790 00:28:18.790 ' 00:28:18.790 02:44:59 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:18.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.790 --rc genhtml_branch_coverage=1 00:28:18.790 --rc genhtml_function_coverage=1 00:28:18.790 --rc genhtml_legend=1 00:28:18.790 --rc geninfo_all_blocks=1 00:28:18.790 --rc geninfo_unexecuted_blocks=1 00:28:18.790 00:28:18.790 ' 00:28:18.790 02:44:59 -- 
common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:18.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.790 --rc genhtml_branch_coverage=1 00:28:18.790 --rc genhtml_function_coverage=1 00:28:18.790 --rc genhtml_legend=1 00:28:18.790 --rc geninfo_all_blocks=1 00:28:18.790 --rc geninfo_unexecuted_blocks=1 00:28:18.790 00:28:18.790 ' 00:28:18.790 02:44:59 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:18.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.790 --rc genhtml_branch_coverage=1 00:28:18.790 --rc genhtml_function_coverage=1 00:28:18.790 --rc genhtml_legend=1 00:28:18.790 --rc geninfo_all_blocks=1 00:28:18.790 --rc geninfo_unexecuted_blocks=1 00:28:18.790 00:28:18.790 ' 00:28:18.790 02:44:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:18.790 02:44:59 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:18.790 02:44:59 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.790 02:44:59 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.790 02:44:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.790 02:44:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.790 02:44:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.790 02:44:59 -- paths/export.sh@5 -- $ export PATH 00:28:18.790 02:44:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.790 02:44:59 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:18.790 02:44:59 -- common/autobuild_common.sh@440 -- $ date +%s 00:28:18.790 02:44:59 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732157099.XXXXXX 00:28:18.790 02:44:59 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732157099.3DxlHB 00:28:18.790 02:44:59 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:28:18.790 02:44:59 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:28:18.790 02:44:59 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:18.790 02:44:59 -- 
common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:18.790 02:44:59 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:18.790 02:44:59 -- common/autobuild_common.sh@456 -- $ get_config_params 00:28:18.790 02:44:59 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:18.790 02:44:59 -- common/autotest_common.sh@10 -- $ set +x 00:28:18.790 02:44:59 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:28:18.790 02:44:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:18.790 02:44:59 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:18.790 02:44:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:18.790 02:44:59 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:18.790 02:44:59 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:18.790 02:44:59 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:18.790 02:44:59 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:18.790 02:44:59 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:18.790 02:44:59 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:19.049 02:44:59 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:19.049 + [[ -n 5227 ]] 00:28:19.049 + sudo kill 5227 00:28:19.058 [Pipeline] } 00:28:19.073 [Pipeline] // timeout 00:28:19.078 [Pipeline] } 00:28:19.093 [Pipeline] // stage 00:28:19.099 [Pipeline] } 00:28:19.115 [Pipeline] // catchError 00:28:19.126 [Pipeline] stage 00:28:19.129 [Pipeline] { (Stop VM) 00:28:19.143 [Pipeline] sh 00:28:19.424 + vagrant halt 00:28:21.958 ==> default: Halting domain... 00:28:28.536 [Pipeline] sh 00:28:28.817 + vagrant destroy -f 00:28:32.102 ==> default: Removing domain... 00:28:32.115 [Pipeline] sh 00:28:32.396 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:32.404 [Pipeline] } 00:28:32.416 [Pipeline] // stage 00:28:32.421 [Pipeline] } 00:28:32.433 [Pipeline] // dir 00:28:32.438 [Pipeline] } 00:28:32.452 [Pipeline] // wrap 00:28:32.458 [Pipeline] } 00:28:32.471 [Pipeline] // catchError 00:28:32.480 [Pipeline] stage 00:28:32.482 [Pipeline] { (Epilogue) 00:28:32.495 [Pipeline] sh 00:28:32.776 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:38.059 [Pipeline] catchError 00:28:38.061 [Pipeline] { 00:28:38.074 [Pipeline] sh 00:28:38.385 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:38.433 Artifacts sizes are good 00:28:38.436 [Pipeline] } 00:28:38.454 [Pipeline] // catchError 00:28:38.467 [Pipeline] archiveArtifacts 00:28:38.476 Archiving artifacts 00:28:38.598 [Pipeline] cleanWs 00:28:38.631 [WS-CLEANUP] Deleting project workspace... 00:28:38.631 [WS-CLEANUP] Deferred wipeout is used... 00:28:38.638 [WS-CLEANUP] done 00:28:38.640 [Pipeline] } 00:28:38.655 [Pipeline] // stage 00:28:38.661 [Pipeline] } 00:28:38.675 [Pipeline] // node 00:28:38.681 [Pipeline] End of Pipeline 00:28:38.721 Finished: SUCCESS